Any research on right-hand/left-hand based preferences when interacting with an interface?

My original question is "Left or right placement of interactive elements on a web page", but only now did I find a place where I think I can find the right people to understand it.

The gist is: is there any natural inclination in right-handed people, when reading a web page, for example, to prefer those interface elements which lead to actions (e.g. "print", "save", "get a link", etc.) to be placed on the right side of the screen and then, to prefer navigational elements (e.g. menus, especially tree-structured category menus) to be placed on the left side, while the preference for content for reading would be in the middle?

That is: navigation (reference stuff) on the left, content (passive perception stuff) in the middle, action elements on the right, all in this manner because of a potential instinct to reach for anything action related with one's right hand.

Is there any research or at least speculation on the subject?

And a follow-up question: if there is such an inclination, does it appear to be reversed in left-handed people?

Update: a similar question was asked here: http://www.quora.com/Are-right-handers-more-likely-to-rest-their-cursor-pointer-on-the-right-hand-side-of-the-screen


There is indeed some research on handedness and user interfaces, but not exactly at the level you seem to be after. Handedness matters for tablet interfaces; hand occlusion is a particular concern there.

Some references: http://hal.inria.fr/hal-00670516/en and http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.89.4546

Speculating a little myself, I would however be careful when drawing practical conclusions from any putative “natural” handedness-related inclination. Maybe there is such a thing, and it could conceivably be measured, but you need much more than that to justify any UI decision.

In particular, cultural factors (in a broad sense, including web and platform conventions) could just as well play a big role. The usability literature has always emphasized consistency and the fact that your interface is just going to be one of many interfaces with which users are regularly interacting; so when in doubt, always try to follow common practice, not necessarily because it is better as such but simply to avoid confusing your users. In any case, there is no reason to worry specifically about “natural” inclinations (as opposed to learned ones).


Tools that help people learn

I aspire to build systems that make it possible to deploy effective pedagogical interventions at scale (e.g., Learnersourcing project) or in contexts where such interventions would be difficult to apply without help from technology (e.g., PETALS project).

Projects

TELLab: Experimentation @ Scale to Support Experiential Learning in Social Sciences and Design

Well-conducted lecture demonstrations of natural phenomena improve students' engagement, learning and retention of knowledge. Similarly, laboratory modules that allow for genuine exploration and discovery of relevant concepts can improve learning outcomes. These pedagogical techniques are used frequently in natural sciences and engineering to teach students about phenomena in the physical world. But how might we conduct a lecture demonstration of the impact of extraneous cognitive load on performance? How might we design a lab in which students explore how adding decorations to visualizations affects their comprehension and memorability? We are developing tools, content and procedures to bring experiential learning techniques to social science and design-related courses that teach concepts related to human perception, cognition and behavior. Specifically, we are working to develop software technologies to enable rapid, large-scale and ethical online human-subjects experimentation in undergraduate design-related courses. See the project web site for more.

Na Li, Krzysztof Z. Gajos, Ken Nakayama, and Ryan Enos. TELLab: An Experiential Learning Tool for Psychology. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, L@S '15, pages 293-297, New York, NY, USA, 2015. ACM.
[Abstract, BibTeX, etc.]

Organic Peer Assessment

We are developing tools and techniques for organic peer assessment, an approach where assessment occurs as a side effect of students performing activities, which they find intrinsically motivating. Our preliminary results, obtained in the context of a flipped classroom, show that the quality of the summative assessment produced by the peers matched that of experts, and we encountered strong evidence that our peer assessment implementation had positive effects on achievement.

Steven Komarov and Krzysztof Z. Gajos. Organic Peer Assessment. In Proceedings of the CHI 2014 Learning Innovation at Scale workshop , 2014.
[Abstract, BibTeX, etc.]

Learnersourcing: Leveraging Crowds of Learners to Improve the Experience of Learning from Videos

Rich knowledge about the content of educational videos can be used to enable more effective and more enjoyable learning experiences. We are developing tools that leverage crowds of learners to collect rich metadata about educational videos as a byproduct of the learners' natural interactions with the videos. We are also developing tools and techniques that use this metadata to improve the learning experience for others.

Sarah Weir, Juho Kim, Krzysztof Z. Gajos, and Robert C. Miller. Learnersourcing Subgoal Labels for How-to Videos. In Proceedings of CSCW'15 , 2015.
[Abstract, BibTeX, etc.]

Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen (Daniel) Li, Krzysztof Z. Gajos, and Robert C. Miller. Data-Driven Interaction Techniques for Improving Navigation of Educational Videos. In Proceedings of UIST'14 , 2014. To appear.
[Abstract, BibTeX, Video, etc.]

Juho Kim, Phu Nguyen, Sarah Weir, Philip J Guo, Robert C Miller, and Krzysztof Z. Gajos. Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos. In Proceedings of CHI 2014 , 2014. To appear. Honorable Mention
[Abstract, BibTeX, etc.]

Juho Kim, Shang-Wen (Daniel) Li, Carrie J. Cai, Krzysztof Z. Gajos, and Robert C. Miller. Leveraging Video Interaction Data and Content Analysis to Improve Video Learning. In Proceedings of the CHI 2014 Learning Innovation at Scale workshop , 2014.
[Abstract, BibTeX, etc.]

Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, and Robert C. Miller. Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos. In Proceedings of Learning at Scale 2014, 2014. To appear.
[Abstract, BibTeX, etc.]

Juho Kim, Robert C. Miller, and Krzysztof Z. Gajos. Learnersourcing subgoal labeling to support learning from how-to videos. In CHI '13 Extended Abstracts on Human Factors in Computing Systems , CHI EA '13, pages 685-690, New York, NY, USA, 2013. ACM.
[Abstract, BibTeX, etc.]

Ingenium: Improving Engagement and Accuracy with the Visualization of Latin for Language Learning

Learners commonly make errors in reading Latin because they do not fully understand the impact of Latin's grammatical structure--its morphology and syntax--on a sentence's meaning. Synthesizing instructional methods used for Latin and artificial programming languages, Ingenium visualizes the logical structure of grammar by making each word into a puzzle block, whose shape and color reflect the word's morphological forms and roles. Watch the video to see how it works.

Sharon Zhou, Ivy J. Livingston, Mark Schiefsky, Stuart M. Shieber, and Krzysztof Z. Gajos. Ingenium: Engaging Novice Students with Latin Grammar. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems , CHI '16, pages 944-956, New York, NY, USA, 2016. ACM.
[Abstract, BibTeX, Video, etc.]

PETALS Project -- A Visual Decision Support Tool For Landmine Detection

We have built an interactive visualization system, called PETALS, that helps novice deminers learn how to correctly identify dangerous cluster configurations of landmines. We conducted a controlled study with two experienced instructors from the Humanitarian Demining Training Center (HDTC) in Fort Leonard Wood, Missouri and 58 participants, who were put through the basic landmine detection course. Half of the participants had access to PETALS during training and half did not. During the final exam, which all participants completed without PETALS, participants who used PETALS during training were 72% less likely to make a mistake on the cluster tasks. These results are not yet published, but the available papers capture the initial development and evaluation of the PETALS system.

Lahiru Jayatilaka, David M. Sengeh, Charles Herrmann, Luca Bertuccelli, Dimitrios Antos, Barbara J. Grosz, and Krzysztof Z. Gajos. PETALS: Improving Learning of Expert Skill in Humanitarian Demining. In Proc. COMPASS '18: ACM SIGCAS Conference on Computing and Sustainable Societies , 2018. To appear.
[Abstract, BibTeX, etc.]

Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. Evaluating a Pattern-Based Visual Support Approach for Humanitarian Landmine Clearance. In CHI '11: Proceedings of the annual SIGCHI conference on Human factors in computing systems, New York, NY, USA, 2011. ACM.
[Abstract, BibTeX, Authorizer, etc.]

Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. PETALS: a visual interface for landmine detection. In Adjunct proceedings of the 23rd annual ACM symposium on User interface software and technology, UIST '10, pages 427-428, New York, NY, USA, 2010. ACM.
[Abstract, BibTeX, Authorizer, etc.]



Below is a list of all projects available for summer 2021. Please review these prior to applying; in the application, you will be asked to select your top three projects of interest.

Accelerating Innovation Through Analogy Mining

Project Description: We are looking for students to build prototype interactive systems that accelerate the rate of scientific innovation through mining analogies. Students will work with a graduate student mentor and a faculty advisor at HCII to gain hands-on experience in building interactive visualizations and developing and applying cutting-edge natural language processing models. As a member of this project, you will play a key role in defining, designing, developing, and evaluating interactive visualizations and algorithms involving the challenging dataset of scientific text. You will contribute to advancing techniques for finding deep structural (analogical) relations between scientific concepts that go beyond simple keyword matching based on surface similarity. You will also contribute to scaling these techniques to millions of scientific papers to support interactive visualizations. To evaluate the visualizations, we will conduct user studies measuring how such interfaces might support scientists’ creativity.
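As a rough illustration of the gap between surface keyword matching and the deeper semantic matching this project targets, the sketch below compares TF-IDF keyword overlap with dense sentence-embedding similarity for two short descriptions that share a mechanism but few words. The library and model choices (scikit-learn, sentence-transformers, all-MiniLM-L6-v2) and the toy texts are illustrative assumptions, not the project's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer  # assumption: any sentence encoder would do

texts = [
    "A pump that moves fluid using rotating blades.",
    "A ventilation fan that circulates air with spinning rotor blades.",
]

# Surface similarity: counts shared keywords only, so near-synonyms contribute nothing.
tfidf = TfidfVectorizer().fit_transform(texts)
print("keyword (TF-IDF) similarity:", cosine_similarity(tfidf)[0, 1])

# Semantic similarity: dense embeddings can pick up the shared mechanism
# ("move a medium with rotating blades") despite the different vocabulary.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
embeddings = model.encode(texts)
print("embedding similarity:", cosine_similarity(embeddings)[0, 1])
```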

  • Interest and experience in building Web-based interactive systems.
  • Prior knowledge in Natural Language Processing technologies is preferred but not required.
  • We use D3.js for data visualization, and Tensorflow and Apache Beam for developing and deploying machine learning models.

Research Areas in Which This Project Falls:

  • Applied Machine Learning
  • Artificial Intelligence (AI)
  • Data Visualization
  • Methods
  • Scientific Collaboration
  • User Experience (UX)

Project Description: Autonomous vehicles could be designed to increase the mobility of those who have physical, sensory, or cognitive disabilities. We are working to develop new design ideas for how to make autonomous cars best serve those with disabilities. To develop new solutions, we are working with members of the disability community and transportation advocates to understand people's mobility needs, current challenges, and hopes for the future. We are looking for student researchers interested in conducting user research, design, and prototyping around making autonomous cars accessible.

  • Experience conducting user studies
  • Qualitative research (interviews, surveys, focus groups)
  • Participatory design
  • Some experience with electronics
  • Basic experience with programming
  • Some Fabrication experience

Research Areas in Which This Project Falls:

  • Accessibility
  • Artificial Intelligence (AI)
  • Design Research
  • Internet of Things (IoT)
  • Sensors
  • Service Design
  • User Experience (UX)

Project Description: When learning to use a new piece of software, developers typically spend a large amount of time reading, understanding and trying to answer questions with online materials. To help developers keep track of important, confusing, and useful information with the intent of sharing their learning with future developers, we developed a social annotation tool, Adamite, a Google Chrome extension. For more information, you can read our recently-submitted paper: https://horvathaa.github.io/public/publications/adamite-submission-deano. While Adamite is successful as an extension, we want you to help us refine Adamite’s current features, and design and build new features depending upon your own interests. Possible project focuses may include: extending Adamite to work in a code editor like Visual Studio Code or on a mobile device, adding new features to make annotations even easier and more beneficial to the author and to subsequent users (such as using intelligent techniques to cluster related annotations), studying developers’ actual usage of annotations during a software learning task, or porting Adamite to entirely new domains beyond programming, like shopping. Working on this project may result in being an author on a publication at a prestigious conference such as CHI, UIST, or CSCW, and we are hoping to release Adamite as an open source project for general use.

  • Required Skills: Some web development experience (e.g., one completed introductory web development course) OR some design and prototyping experience (e.g., experience using Figma or Adobe Creative suite tools) OR experience performing user studies (e.g., interviews, survey design, A/B lab studies)
  • Performing qualitative data analysis (e.g., grounded theory)
  • Preferred skills: Experience using React

Research Areas in Which This Project Falls:

Project Description: Artificial intelligence (AI) systems are increasingly used to assist humans in making high-stakes decisions, such as online information curation, resume screening, mortgage lending, police surveillance, public resource allocation, and pretrial detention. While the hope is that the use of algorithms will improve societal outcomes and economic efficiency, concerns have been raised that algorithmic systems might inherit human biases from historical data, perpetuate discrimination against already vulnerable populations, and generally fail to embody a given community's important values. Recent work on algorithmic fairness has characterized the manner in which unfairness can arise at different steps along the development pipeline, produced dozens of quantitative notions of fairness, and provided methods for enforcing these notions. However, there is a significant gap between the over-simplified algorithmic objectives and the complications of real-world decision-making contexts. This project aims to close the gap by explicitly accounting for the context-specific fairness principles of actual stakeholders, their acceptable fairness-utility trade-offs, and the cognitive strengths and limitations of human decision-makers throughout the development and deployment of the algorithmic system.

To meet these goals, this project enables close human-algorithm collaborations that combine innovative machine learning methods with approaches from human-computer interaction (HCI) for eliciting feedback and preferences from human experts and stakeholders. There are three main research activities that naturally correspond to three stages of a human-in-the-loop AI system. First, the project will develop novel fairness elicitation mechanisms that will allow stakeholders to effectively express their perceptions on fairness. To go beyond the traditional approach of statistical group fairness, the investigators will formulate new fairness measures for individual fairness based on elicited feedback. Secondly, the project will develop algorithms and mechanisms to manage the trade-offs between the new fairness measures developed in the first step, and multiple existing fairness and accuracy measures. Finally, the project will develop algorithms to detect and mitigate human operators' biases, and methods that rely on human feedback to correct and de-bias existing models during the deployment of the AI system.
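To make "statistical group fairness" concrete, here is a minimal sketch of one widely used metric, the demographic parity difference: the gap in positive-prediction rates between two groups. It is purely illustrative background, not the elicitation-based individual-fairness measures this project proposes.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy example: six applicants, binary "approve" predictions, two demographic groups.
print(demographic_parity_difference([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1]))  # ~0.33
```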

  • Preferred but not required: Working with the project team to plan and conduct research studies, including interviews, user studies, surveys, design workshops, or behavioral experiments.
  • Preferred but not required: Analyzing and interpreting data collected from research studies, in collaboration with other project team members.
  • Preferred but not required: Ideating and designing new tools to improve fairness in machine learning practice.

Research Areas in Which This Project Falls:

  • Applied Machine Learning
  • Artificial Intelligence (AI)
  • Social Computing
  • Societal Problems

Lead Mentor: Alexandra Ion

Project Description: We are looking to push the boundaries of mechanical metamaterials by unifying material and device. Metamaterials are advanced materials that can be designed to exhibit unusual properties and complex behavior. Their function is defined by their cell structure, i.e., their geometry. Such materials can incorporate entire mechanisms, computation, or re-configurable properties within their compliant cell structure, and have applications in product design, shape-changing interfaces, prosthetics, aerospace and many more.

In this project, we will develop design tools that allow novice users and makers to design their own complex materials and fabricate them using 3D printing or laser cutting. This may involve playfully exploring new cell designs, creating novel application examples by physical prototyping and developing open source software.

  • CS skills: software development, background in geometry, optimization, and/or simulation
  • 3D modeling basics (CAD tools, e.g., Autodesk Fusion 360 or similar)
  • Basic knowledge of classical mechanics or material science

You don’t have to cover all skills, since this will likely be a group project. We are looking for diverse teams with complementary skills.

Research Areas in Which This Project Falls:

Project Description: People love products and services that use artificial intelligence (AI) to make the product work better. Today, more and more companies are searching for ways to do this. From spam filters that save people time and attention to recommenders that make it easier to find something of interest to conversational agents that offer a more natural way to interact with a computer to fully functioning smart homes and driverless cars, AI can make things better.

Unfortunately, UX designers, the people most often asked to come up with new ideas for products and services, really struggle when trying to innovate with AI. These professionals often fail to notice the many simple ways that AI can make people’s interaction better. In addition, when they do try to envision new things, they most often generate ideas for things that cannot be built. Our work focuses on helping UX designers to become better at envisioning what AI can do and then communicating their ideas to development teams.

Our work addresses this challenge in three ways. First, we are making resources to help designers better understand AI’s capabilities and its dependency on labeled datasets. Second, we are making new design tools that scaffold designers in thinking through what interaction with a probabilistic system might be like, a system that can make inference errors. Third, we are working with professional designers working in industry to better understand their work practices and to identify the best time and place for them to use the resources and the tools.

We are looking for research assistants to help us with our research. This work will involve:
1. Designing user interfaces that make our AI resources available to designers. These include a taxonomy of AI capabilities and a collection of AI interaction design patterns.
2. Designing user interfaces for new tools that help designers recognize when they should search for opportunities to use AI to enhance their designs.
3. Conducting interviews and participating in workshops with professional designers who want to get better at working with AI.

Students working on this project will learn about AI capabilities from a UX design perspective, and they will develop resources for designers to leverage AI opportunities in their work.

  • Design Research (e.g., user interviews, design workshops, affinity diagrams)
  • User interface design (sketching and prototyping of novel UIs)
  • Interest in Human-AI Interaction
  • Strong organizational skills, reliable, self-motivated
  • Education in data science, analytics, or data mining, and/or experience working with user telemetry data

Research Areas in Which This Project Falls:

  • Artificial Intelligence (AI)
  • Design Research
  • Service Design
  • Tools
  • User Experience (UX)

Project Description: We are looking for a student interested in interaction design and/or AI to help with the Apprentice Learner Framework and SmartSheet. The goal of these two projects is to build an AI system that can be used for rapid Intelligent Tutoring System authoring. The Apprentice Learner learns to solve problems from a human's demonstrations and correctness feedback and in turn produces a set of rules that can be used as a tutoring system for students. SmartSheet aims to use AL in conjunction with handwriting recognition to build handwriting-capable tutoring systems delivered via a tablet-and-stylus interface. We are also interested in hosting students interested in expanding the Apprentice Learner Framework in general, including for the purposes of cognitive modeling.

  • Requirement: Solid programming skills (mostly Python; JavaScript would also be helpful)
  • Preferred: Some interest/experience with machine learning
  • Optional: UX design/implementation skills

Research Areas in Which This Project Falls:

  • Applied Machine Learning
  • Artificial Intelligence (AI)
  • Intelligent Tutoring Systems
  • User Experience (UX)

Lead Mentor: David Lindlbauer

Project Description: Augmented Reality and Virtual Reality offer interesting platforms to re-define how users interact with the digital world. It is unclear, however, what the requirements in terms of usability and interaction are to avoid overloading users with unnecessary information. In this project, we will build on existing machine learning approaches such as saliency prediction that leverage insights into human visual perception. The goal is to create computational approaches to improve the applicability and usefulness of AR and VR systems.

  • Strong technical background
  • Some experience with 3D editors (e.g. Unity, Unreal) and 3D programming
  • Some familiarity with Computer Vision

Research Areas in Which This Project Falls:

  • Applied Machine Learning
  • Augmented Reality (AR)
  • Context-Aware Computing
  • Virtual Reality (VR)

Project Description: Vega-Lite is a high-level grammar of interactive graphics. It provides a concise, declarative JSON syntax to create an expressive range of visualizations for data analysis and presentation. It is used by thousands of data enthusiasts, ML scientists, and companies around the world. We have a number of projects around adding new features to the visualization toolkits that are going to be part of the open-source tool. Please take a look at https://docs.google.com/document/d/1fscSxSJtfkd1m027r1ONCc7O8RdZp1oGABwca2pgV_E and the issue trackers for some specific project ideas we could work on.
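For readers unfamiliar with Vega-Lite, the sketch below shows what a minimal specification looks like. It is written here as a Python dict so it can be printed as JSON; the toy data are made up, and the same specification can be pasted into the online Vega editor to render the chart.

```python
import json

# A minimal Vega-Lite spec: a bar chart over three toy data points.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"category": "a", "value": 4},
        {"category": "b", "value": 7},
        {"category": "c", "value": 2},
    ]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "value", "type": "quantitative"},
    },
}

print(json.dumps(spec, indent=2))
```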

  • Experience with web-development (JavaScript) and Git
  • Experience with data visualization, TypeScript, D3, and Vega are a plus

Research Areas in Which This Project Falls:

Project Description: In this project you will help in the design of visual and experimental aspects of a decimal number learning game, Decimal Point (http://www.cs.cmu.edu/~bmclaren/projects/DecimalPoint/), to prepare it for new classroom studies. You will work with a professor and researchers who do learning science studies in middle school classrooms. You should have design skills, both to help with new artwork in the game and to prepare for the classroom studies. An ideal candidate will be someone with a psychology and/or design/art background.

Research Areas in Which This Project Falls:

  • Education
  • Games
  • Learning Sciences and Technologies
  • Social Good
  • User Experience (UX)

Project Description: In this project you will write and revise code to alter an existing decimal number learning game, Decimal Point (http://www.cs.cmu.edu/~bmclaren/projects/DecimalPoint/), to prepare it for new classroom studies. You will work with a professor and researchers who do learning science studies in middle school classrooms. You’ll learn about those studies, as well as practice important professional technical skills, such as using source code repositories and engaging in good software engineering practice. Preferred, but not necessary, is that you will have skills in HTML5/CSS3/JavaScript. Familiarity with Angular or AngularJS is also desirable.

  • Programming skills (e.g., Python, Java)
  • Knowledge of HTML5/CSS3/JavaScript
  • Knowledge of Angular or AngularJS
  • A desire to work with a fun research team!

Research Areas in Which This Project Falls:

  • Education
  • Games
  • Intelligent Tutoring Systems
  • Learning Sciences and Technologies
  • Social Good
  • User Experience (UX)

Project Description: Students learn best when offered timely, appropriate scaffolding that corresponds to their current knowledge level. With AI-based intelligent tutoring systems that can calculate students' current skill mastery of certain knowledge, it is possible to offer personalized, adaptive practice based on each student's prior knowledge. This may better prepare students for the next-to-be-learned knowledge, reduce the time they spend struggling unproductively or practicing knowledge they have already mastered, and increase their learning efficiency through more personalized, focused and targeted practice. Such adaptive practice of prior knowledge also holds the potential to bridge the knowledge gap between different students and promote education equity. In this project, we are interested in exploring the design space of building such a system/algorithm.
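As one concrete (and deliberately simplified) example of how a system can estimate "current skill mastery" from a practice history, below is a standard Bayesian Knowledge Tracing update. The parameter values are illustrative placeholders; this is background for the idea, not necessarily the model this project will build.

```python
def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: update P(skill mastered) after an attempt."""
    if correct:
        evidence = p_mastery * (1 - slip)            # mastered and did not slip
        total = evidence + (1 - p_mastery) * guess   # ...or unmastered but guessed
    else:
        evidence = p_mastery * slip                  # mastered but slipped
        total = evidence + (1 - p_mastery) * (1 - guess)
    posterior = evidence / total
    # Account for the chance that the student learned the skill during this step.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior estimate of mastery
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
    print(round(p, 3))  # the rising/falling estimate drives which problems to assign next
```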

  • Preferred: skills in web-based technology development
  • Preferred: HCI/design skills, especially related to web applications
  • Preferred: skills in AI/machine learning/data mining

Research Areas in Which This Project Falls:

  • Artificial Intelligence (AI)
  • Design Research
  • Education
  • Intelligent Tutoring Systems
  • Learning Sciences and Technologies
  • Social Good
  • User Experience (UX)

Project Description: In research studies, we found that Lynnette, an AI-based tutoring system for middle-school equation solving, is very effective in helping students learn. To take Lynnette to the next level, we are now working to make it more engaging by adding gamification elements such as a space theme, a badge system, achievements, and a narrative context. We also added a drag-and-drop interaction format for equation solving, for variety and smooth interactions. We would like to make Lynnette even more engaging. We have a variety of ideas, including personalized dashboards, culturally-adaptive story problems, and adapting instruction to social and meta-cognitive factors (e.g., sense of belonging in the classroom, self-efficacy). We are open to other ideas as well. The work involves design brainstorming, co-design with middle-school students, prototyping, and trying out prototypes with students.

  • Preferred skills: Design/HCI, prototyping, web development
  • Experience with game design, educational technology, and designing for teenagers is not required but would be considered a plus

Research Areas in Which This Project Falls:

  • Design Research
  • Education
  • Games
  • Intelligent Tutoring Systems
  • Learning Sciences and Technologies
  • User Experience (UX)

Project Description: We leverage stimuli-responsive actuators and smart materials to develop wearables and second-skin applications that sense and actuate. We aim to investigate novel fabrication processes, build novel manufacturing tools, and explore new structures and mechanisms for physical prototypes. Students are encouraged to look for a balance of impactful application and fundamental research.

  • We are looking for candidates who have hands-on design and making experiences.
  • If you have digital fabrication experience or hands-on craft/making projects, please email Prof. Lining Yao and share your portfolio: liningy [at] andrew.cmu.edu

Research Areas in Which This Project Falls:

Project Description: Over a trillion hours per year are spent searching for and making sense of complex information. For a lot of this "sensemaking," search is just the beginning: it’s also about building up your landscape of options and criteria and keeping track of all you’ve considered and learned as you go. We've found through more than a decade of research that existing tools don't support this messy process. So we are building a tool that does.

  • Looking for undergraduates with background and interest in one or more of:
    • Design, both visual and interaction
    • UX research
    • Customer discovery and lean methods
    • Front end programming (especially with React/Firebase experience)

    Research Areas in Which This Project Falls:

    Project Description: Our research experience in K-12 classrooms equipped with AI-based tutoring systems has shown that, even though the software is coaching each student individually with adaptive guidance, teachers still play an enormous role in student learning. For example, we have seen teachers in these classrooms team up students on the fly (e.g., a student who has learned a lot already with one who is struggling), for brief one-on-one extra aid. For teachers to be most effective, however, they must be able, in real time, to see how their students are doing (struggling, disengaged, very far into covering the learning objectives, etc.). This information will enable them to take action to aid those who need help. We are creating tools by which middle-school teachers can effectively orchestrate activity in these classrooms without taking their attention away from the students. The tools use wearable and mobile devices with artificial intelligence to turn the voluminous data generated by the software into actionable diagnostic information and recommendations for teachers. In a new project, we plan to provide tutoring software that supports both individual and collaborative problem-solving by students, to enhance the effectiveness of the spontaneous teaming up of students we have observed. We seek student interns interested in any of the many efforts needed to make these technologies work, such as testing and refining collaborative ITSs with students; doing data mining to create new learning analytics that inform teachers about how their students are doing; and doing design-based research to design and prototype intuitive, comprehensible visualizations for teachers for different hardware options, and to try them out with teachers.

    • HCI/design skills (preferred)
    • AI/machine learning/data mining (preferred)
    • Web technologies (preferred)

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Artificial Intelligence (AI)
    • Data Visualization
    • Design Research
    • Intelligent Tutoring Systems
    • Learning Sciences and Technologies
    • User Experience (UX)

    Project Description: In this project we are developing an interactive prototype for motivational interviewing training using a human-centered design approach. Motivational Interviewing (MI) is an effective therapeutic technique to support behavior change, but training is often time consuming and its effectiveness diminishes over time.

    Students on this project will work in a team to design and develop a prototype for time-effective interactive MI training, useful both for initial training and for refreshers. Our prototype will build on our research identifying the unique challenges nurses face in learning MI, which found that MI newcomers struggle with building rapport, analyzing the problem, and promoting readiness for change during therapeutic interactions.

    • Interest in interactive prototype development, interaction design, psychology and/or mental health
    • Web programming (front-end or back-end)
    • Familiarity with inVision, Sketch, or other graphic design tools
    • Familiarity with Python or Javascript

    Research Areas in Which This Project Falls:

    • Design Research
    • Education
    • Games
    • Healthcare
    • Social Computing

    Project Description: Macroinvertebrates.org: The Atlas of Common Freshwater Macroinvertebrates of the Eastern United States has been successfully launched as a definitive teaching and learning collection and online guide for freshwater macroinvertebrate identification, with annotated key diagnostic characters marked down to family and genus for the 150 taxa most commonly used in citizen science water quality assessment and education. This year we are completing the design, development, usability testing, evaluation and release of a fully downloadable mobile app for both Android and iOS operating systems that includes an Aquatic Insect Field Guide, an interactive identification key (Orders) mode, and a self-practice quizzing and games feature.

    • Design and evaluation skills (interviewing, usability testing, qualitative analysis skills)
    • Copy writing and tutorial video production
    • Technical Skills: Android and iOS development in React Native.

    Research Areas in Which This Project Falls:

    • Design Research
    • Education
    • Learning Sciences and Technologies
    • Scientific Collaboration

    Project Description: NoRILLA is a project based on research in Human-Computer Interaction at Carnegie Mellon University. We are developing a new mixed-reality educational system bridging physical and virtual worlds to improve children's STEM learning and enjoyment in a collaborative way. It uses depth camera sensing and computer vision to detect physical objects and provide personalized immediate feedback to children as they experiment and make discoveries in their physical environment. NoRILLA has been used at many school districts, Children's Museum of Pittsburgh, Carnegie Science Center and informal play spaces like IKEA and Bright Horizons. Research with hundreds of children has shown that it improves children's learning by 5 times compared to equivalent tablet or computer games. We have recently received an NSF (Advancing Informal STEM Learning) grant in collaboration with Science and Children's Museums around the country to expand our Intelligent Science Stations/Exhibits and develop the AI technology further. Responsibilities will include taking the project further by developing new modules/games, computer vision algorithms and AI enhancements on the platform, deploying upcoming installations, and participating in research activities around it.

    • The project has both software and hardware components.
    • Familiarity with computer vision, Processing/Java, game/interface development and/or robotics is a plus.

    Research Areas in Which This Project Falls:

    • Artificial Intelligence (AI)
    • Augmented Reality (AR)
    • Education
    • Games
    • Learning Sciences and Technologies
    • Societal Good

    Project Description: Past research shows that AI-based tutoring systems--software that coaches students step-by-step as they try to solve complex problems--can be very helpful aids to learning, for example in middle school and high school mathematics learning. The effectiveness of tutoring software has been shown in many different subject areas, and some have reached commercial markets and are in daily use in many schools. Inspired by these results, we have developed authoring tools that make the creation of tutoring software much easier--in many cases, it can be done entirely without programming, opening the door to a wide range of authors. Instructors in fields such as math and physics have used our tools to create tutoring software for their classes. We are now creating a new version of our authoring tools with the knowledge we have gained over the 20 years since their first release. We seek to make them intuitive, clear, efficient and readily available--without requiring complicated installation or advance instruction. We want to improve both the creation of the student-facing user interface and the specification of the tutoring knowledge and behavior behind it. We want to support teams of teachers collaboratively creating these self-coaching, self-grading online activities that they might assign instead of conventional homework.

    • The work involves user-centered design of authoring tools, web programming, building tool prototypes, and trying them out with tutor authors of various background.

    Research Areas in Which This Project Falls:

    • Artificial Intelligence (AI)
    • Education
    • End-User Programming
    • Intelligent Tutoring Systems
    • Learning Sciences and Technologies
    • User Experience (UX)

    Project Description: A key trend emerging from the popularity of smart, mobile devices is the quantified-self movement. The movement has manifested in the prevalence of two kinds of personal wellness devices: (1) fitness devices (e.g., FitBit), and (2) portable and connected medical devices (e.g., Bluetooth-enabled blood pressure cuffs). The fitness devices are seamless and very portable, but offer low-fidelity information to the user. They do not generate any medically-relevant data. We are currently working on building personal medical devices that are as seamless to use as a FitBit but generate medically-relevant data. We are looking for students to contribute to various aspects of this project. Depending on their interest, students can help build and prototype the mobile app or hardware device, or contribute to the signal processing and machine learning component.

    Some subset of these skills would be useful:

    • Programming experience in Java and/or Python
    • Mobile programming
    • Machine learning
    • Hardware prototyping

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Healthcare
    • Internet of Things (IoT)
    • Sensors
    • Societal Problems
    • Wearables

    Project Description: While privacy is an important element for smart home products, it takes a great deal of effort to design and develop features to support end-user privacy. These individually built features result in distributed and non-consistent interfaces, further imposing challenges for user privacy management. Peekaboo is a new IoT app development framework that aims to make it easier for developers to build privacy-sensitive smart home apps through the intermediary of a smart home hub, while simultaneously offering architectural support for building centralized and consistent privacy features across all the apps. At the heart of Peekaboo is an architecture that allows developers to factor out common data preprocessing functions from the cloud service side onto a user-controlled hub and supports these functions through a fixed set of reusable, chainable, open-source operators. These operators then become the common structure of all the Peekaboo apps. Privacy features built on these operators become native features in the Peekaboo ecosystem.
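    To give a flavor of the chainable-operator idea described above, here is a small hypothetical Python sketch: raw sensor data is reduced on the hub by a pipeline of reusable operators so that only derived, less sensitive values remain available to forward. All names are invented for illustration and are not Peekaboo's actual API.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Frame:
        pixels: Optional[bytes]      # raw camera frame; should never leave the hub
        person_count: int = 0

    def detect_people(frame: Frame) -> Frame:
        # Placeholder for an on-hub detector; a real operator would run a vision model here.
        frame.person_count = 1
        return frame

    def drop_raw_pixels(frame: Frame) -> Frame:
        frame.pixels = None          # privacy step: discard raw imagery before anything is sent
        return frame

    def run_pipeline(frame: Frame, operators: List[Callable[[Frame], Frame]]) -> Frame:
        for op in operators:         # operators are reusable and chainable across apps
            frame = op(frame)
        return frame

    result = run_pipeline(Frame(pixels=b"..."), [detect_people, drop_raw_pixels])
    print(result)  # Frame(pixels=None, person_count=1): only the derived count remains
    ```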

    Research Areas in Which This Project Falls:

    • End-User Programming
    • Internet of Things (IoT)
    • Security and Privacy
    • Sensors

    Project Description: Personalized Learning² (PL²) is an initiative addressing the opportunity gap for marginalized students through personal mentoring and tutoring with artificial intelligence-powered learning software. (personalizedlearning2.org)

    • The student researcher will contribute to designs that enhance the general usability of the PL² web application
    • The student researcher will need to draw on their UI/UX and Communication Design skills and prior experience to create mock-ups to be presented for team approval
    • Ideally the candidate will also be able to conduct user interviews to determine their needs and contribute to the creation of demo videos
    • Front-end programming experience (HTML, CSS, Javascript) is a nice-to-have but not required

    Research Areas in Which This Project Falls:

    Project Description: Multiple positions are open for summer undergraduate research assistants on an NSF-funded project, Smart Spaces for Making: Networked Physical Tools to Support Process Documentation and Learning. Candidates are sought for two roles: 1. Design Research (skills: conducting interviews, qualitative research analysis); 2. Technical Prototype Development (skills: hardware prototyping, server-side programming). Research assistants will work with an interdisciplinary team of design researchers, technology developers and learning scientists from CMU’s HCII, School of Architecture and the University of Pittsburgh’s Learning Research and Development Center (LRDC) to develop smart documentation tools to support learning practices in creative, maker-based studio environments. Students will participate in design research and development activities working with project site partners including Quaker Valley High School, CMU’s IDeATe program, and AlphaLab Gear’s Startable Youth program. For more details on the project: https://smartmakingtools.weebly.com

    • Design Research (skills: contextual inquiry, interviewing, design probes and qualitative research analysis)
    • Technical Prototype Development (skills: hardware prototyping, server-side programming)

    Research Areas in Which This Project Falls:

    • Education
    • Internet of Things (IoT)
    • Learning Sciences and Technologies
    • Methods

    Project Description: This project leverages theory from psychology and known social influence principles to improve cybersecurity behavior and enhance security tool adoption. Students on this project will work on designing and developing web-based security-related mini games or interventions that incorporate cybersecurity training and information into people’s everyday workflows. They may also have an opportunity to conduct evaluations of these mini games or interventions.

    • Interest in interaction design, psychology and/or cybersecurity
    • Web programming (front-end or back-end)
    • Familiarity with inVision, Sketch, or other graphic design tools
    • Familiarity with Python or Javascript

    Research Areas in Which This Project Falls:

    • Design Research
    • Games
    • Security and Privacy
    • Social Computing
    • Societal Problems

    Project Description: This project focuses on creating a culturally-responsive programming curriculum for underrepresented middle schoolers of color in a coding camp. REU students will participate in creating, refining, and assessing a programming curriculum. We look forward to working with students who would like to pursue this area of work. It is favorable if REU students have: an excitement to promote diversity in and accessibility to computing, a strong interest in and/or prior experience in data analysis, motivation and time management skills, and some experience in programming.

    Research Areas in Which This Project Falls:

    • Education
    • Human-Robot Interaction (HRI)
    • Learning Sciences and Technologies
    • Social Good

    Project Description: In this project, we explore how social robots can be used in out-of-school learning environments to empower Black, Latinx, and Native American middle school girls in computer science. We are using a culturally-responsive computing paradigm to examine how to reflect the learner's identity in the robot, towards the goal of improving learning outcomes, and self-efficacy in computer science. We are looking for motivated REU students, interested in exploring methods of endowing the robot with social characteristics and building rapport between the robot and the learner. REU students will work on data collection and/or analysis, and interaction design. Qualified students should have experience in programming/computer science as well as design/HCI or human-robot interaction.

    Research Areas in Which This Project Falls:

    • Design Research
    • Education
    • Human-Robot Interaction (HRI)
    • Learning Sciences and Technologies
    • Social Good

    Project Description: Research on the human factors of cybersecurity often treats people as isolated individuals rather than as social actors within a web of relationships and social influences. We are developing a better understanding of social influences on security and privacy behaviors across contexts. Students on this project will conduct interviews and surveys on people's cybersecurity behaviors in different types of relationships. They may also have the opportunity to develop and prototype new system design ideas based on research findings.

    • Coursework in psychology, design or cybersecurity
    • Interest in developing interviewing and survey design skills
    • Experience with statistical analysis techniques or packages such as R
    • Interest in interaction design

    Research Areas in Which This Project Falls:

    • Design Research
    • Security and Privacy
    • Social Computing
    • User Experience (UX)

    Project Description: This project (SEME) aims to design a feasible and sustainable chatbot to mentor teachers, in order to support them in their implementation of teacher training methods in rural Ivory Coast. We will be building the chatbot on Facebook and deploying it at scale in the Fall of 2021. (Details: https://seme-cmu.github.io/seme-web/). We are looking for REU students interested in working on the design, data analysis, and development aspects of the project.

    Research Areas in Which This Project Falls:

    • Developing World
    • Learning Sciences and Technologies
    • Social Good
    • Societal Problems

    Project Description: Today, algorithmic decision-support systems (ADS) guide human decisions across a growing range of high-stakes settings – from predictive risk models used to inform child welfare screening decisions, to AI-based classroom tools used to guide instructional decisions, to data-driven decision aids used to guide mental health treatment decisions. While ADS hold great potential to foster more effective and equitable decision-making, in practice these systems often fail to improve, and may even degrade decision quality. Human decision-makers are often either too skeptical of useful algorithmic recommendations (resulting in under-use of decision support) or too reliant upon erroneous or harmfully biased recommendations (adhering to algorithmic recommendations, while discounting other relevant knowledge). To date, scientific and design knowledge remains scarce regarding how to foster appropriate levels of trust and productive forms of human discretion around algorithmic recommendations.

    In this project, we will investigate how to support more responsible and effective use of ADS in real-world contexts where these systems are already impacting the lives of children and families (e.g., mental healthcare, child welfare, and K-12 education). We will create new interfaces and training materials to help both practitioners and affected populations in these contexts decide: (1) when and how much to adhere to algorithmic recommendations, and (2) how to act upon or communicate (dis)trust. In addition, we will conduct experiments to evaluate the impacts different interface and training designs have on human–algorithm decision-making.

    • Experience conducting research with human subjects (e.g., user studies, design workshops, field studies, experiments) is preferred, but not required.
    • Experience with front-end design and/or development is preferred, but not required.
    • Interests and/or background in any of the following areas are a plus: design, cognitive science, anthropology, decision science, statistics, artificial intelligence, machine learning, psychology, learning sciences.

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Artificial Intelligence (AI)
    • Data Visualization
    • Design Research
    • Education
    • Ethics
    • Healthcare
    • Learning Sciences and Technologies
    • Social Good
    • Societal Problems

    Project Description: This project is seeking an undergraduate researcher with skills in HCI, design research, and/or social computing to conduct research on the impact of automation in the hospitality industry. This role is primarily dedicated to study design, data collection, and project management related to research on the Future of Work. This collaboration creates a unique opportunity for innovation focused on identifying challenges hospitality workers experience due to emerging technology, defining the pipeline of innovations in hospitality, evaluating workforce issues including training challenges and deficiencies, and examining policy questions and options.


    Interactions

    The senses we call upon when interacting with technology are restricted. We mostly rely on vision and hearing, and increasingly touch, but taste and smell remain largely unused. Although our knowledge about sensory systems and devices has grown rapidly over the past few decades, there is still an unmet challenge in understanding people's multisensory experiences in HCI. The goal is that by understanding the ways in which our senses process information and how they relate to one another, it will be possible to create richer experiences for human-technology interactions.

    To meet this challenge, we need specific actions within the HCI community. First, we must determine which tactile, gustatory, and olfactory experiences we can design for, and how to meaningfully stimulate them in technology interactions. Second, we need to build on previous frameworks for multisensory design while also creating new ones. Third, we need to design interfaces that allow the stimulation of unexplored sensory inputs (e.g., digital smell), as well as interfaces that take into account the relationships between the senses (e.g., integration of taste and smell into flavor). Finally, it is vital to understand what limitations come into play when users need to monitor information from more than one sense simultaneously.

    Thinking Beyond Audiovisual Interfaces

    Though much development is needed, in recent years we have witnessed progress in multisensory experiences involving touch. It is key for HCI to leverage the full range of tactile sensations (vibrations, pressure, force, balance, heat, coolness/wetness, electric shocks, pain and itch, etc.), taking into account the active and passive modes of touch and its integration with the other senses. This will undoubtedly provide new tools for interactive experience design and will help to uncover the fine granularity of sensory stimulation and emotional responses.

    Moreover, both psychologists and neuroscientists have advanced the field of multisensory perception over recent decades. For example, they have provided crucial insights on the multisensory interactions that give rise to the psychological "flavor sense" [1]. The development of taste and smell interfaces, and subsequently flavor interfaces, is still in its infancy; much work will be required to create multisensory-based systems that are both meaningful to people and scalable. Nevertheless, technology is advancing rapidly, including some one-off designs such as LOLLio [2], MetaCookie+ [3], and Tongue Mounted Digital Taste Interface/Taste+ [4] (Figure 1).

    Taste+ is an example of how multisensory interaction could improve dining experiences (which, by definition, are multisensorial [1]). The user can augment the flavors of food and beverages by applying weak and controlled electrical pulses on their tongue using electronically enhanced everyday utensils such as spoons and beverage bottles. The initial experimental results show that users perceive virtual salty and sour sensations.

    Moving Toward the Chemical Senses

    Here we want to highlight that there are opportunities to enhance designers' and developers' abilities to create meaningful interactions and make use of the whole spectrum of sensory experiences. However, there are still many challenges when studying taste and particularly smell, especially related to inter-subject variability, varying olfactory preferences over time, and cross-sensory influences. No other sensory modality makes as direct and intense contact with the neural substrates of emotion and memory, which may explain why smell-evoked memories are often emotionally potent.

    Smell and taste are known as the chemical senses because they rely on chemical transduction. We do not yet know entirely how to digitize these senses in the HCI context compared with others like sound and light, where we can measure frequency ranges and convert them into a digital medium (bits).

    As a community, we need to explore and develop design methods and frameworks that provide both quantitative and qualitative parameters for sensory stimulation. In the case of touch, the process is well facilitated through the proliferation of haptic technologies (from contact to contactless devices), but we are still in the early stages of development for taste and smell. However, we are now ahead of the technological development due to the rich understanding achieved by psychology and neuroscience. We thus have the opportunity to shape the development of future taste- and smell-based technologies (Figure 2) [3]. A basic understanding of how these chemical senses could be characterized from an HCI design perspective can be established.

    For instance, Obrist et al. [5] investigated the characteristics of the five basic taste experiences (sweet, salty, bitter, sour, and umami) and suggested a design framework. This framework describes the characteristics of taste experiences across all five tastes, along three themes: temporality, affective reactions, and embodiment. Particularities of each individual taste are highlighted in order to elucidate the potential design qualities of single tastes (Figure 3). For example, sweet sensations can be used to stimulate and enhance positive experiences, though on a limited timescale, as the sweetness quickly disappears, leaving one unsatisfied. It's a pleasant taste but one that is tinged with a bittersweet ending. In contrast to the sweet taste, the sour taste is described as short-lived, often coming as a surprise due to its explosive and punchy character. This taste overwhelms with its rapid appearance and rapid decay. It leaves one with the feeling that something is missing.

    How is This Information Useful for HCI?

    LOLLio, the taste-based game device, currently uses sweet and sour for positive and negative stimulation during game play. We suggest that our framework could improve such games by providing them with fine-grain insights on the specific characteristics of taste experiences that could be integrated into the game play. For example, when a person moves between related levels of a game, a continuing taste like bitter or salty is useful based on the specific characteristics of those tastes. In contrast, when a user is moving to distinct levels or performing a side challenge, an explosive taste like sour, sweet, or umami might be more suitable. The designer can adjust specific tastes in each category to create different affective reactions and a sense of agency.

    There are already a number of suggestions from the context of multisensory product design. For example, Michael Haverkamp [6] has put forward a framework for synesthetic design. The idea here is to achieve "the optimal figuration of objects based upon the systematic connections between the senses." For that purpose, Haverkamp suggests that designers need to take into account different levels of interconnections between the senses, such as the relations between (abstract) sensory features in different modalities (e.g., visual shape and taste qualities) or semantic associations (e.g., as a function of a common identity or meaning) that can for instance be exploited in a multimedia context (Figure 4).

    Directions for Future Research

    Based on multisensory experience research, it is possible to think of a variety of directions for the future. For example, the research on taste experiences presented here can be discussed with respect to its relevance for design, building on existing psychological theories on information processing (e.g., rational and intuitive thinking). The dual process theory, for instance, accounts for two styles of processing in humans: the intuition-based System 1, whose associative reasoning is fast and automatic and carries strong emotional bonds, and the deliberative System 2, which is slower and is influenced by conscious judgments and attitudes. On this account, the rapidity of the sour taste experience does not leave enough time for System 1 to engage with it and triggers System 2 to reflect on what just happened. Such reactions, when carefully timed, can prime users to be more rational in their thinking during a productivity task (e.g., to awaken someone who may be stuck in a loop). Moreover, an appropriately presented taste can create a synchronic experience that can lead to stronger cognitive ease (to make intuitive decisions) or reduce the cognitive ease to encourage rational thinking. Note, of course, that taste inputs will generally be utilized with other sensory inputs (e.g., visual), and thus the alignment or misalignment, or congruency, of the different inputs (in terms of processing style, emotions, identity, and so on) can result in different outcomes (positive or negative).

    Research of this kind could allow designers and developers to meaningfully harness touch, taste, and smell in HCI and open up new ways of talking about the sense of taste and related experiences. People often say things like "I like it. It is sweet," but the underlying properties of specific and often complex experiences in HCI remain silent and consequently inaccessible to designers. Therefore, having a framework that includes more fine-grain descriptions such as "it lingers" and "it is like being punched in the face," which have specific experiential correlates, can lead to the creation of a richer vocabulary for designers and can evoke interesting discussions around interaction design.

    Furthermore, it is crucial to determine the meaningful design space for multisensory interactive experiences. For example, we rarely experience the sense of taste in isolation. Perhaps, aiming for the psychological flavor sense would be a way to go, as we combine taste, olfactory, and trigeminal/oral-somatosensory inputs in our everyday life whenever we eat or drink. Here, it is key to think about congruency and its ability to produce different reactions in the user. At the same time, it is also key to understand the unique properties of each sensory modality before designing for their sensory integration in the design of interactive systems.

    Studying these underexploited senses not only enhances the design space of multisensory HCI but also helps to improve the fundamental understanding of these senses along with their cross-sensory associations.

    1. Spence, C. Multisensory flavor perception. Cell 161, 1 (2015), 24–35.

    2. Murer, M., Aslan, I., Tscheligi, M. LOLLio: Exploring taste as playful modality. Proc. of TEI 2013. 299–302.

    3. Narumi, T., Nishizaka, S., Kajinami, T., Tanikawa, T., and Hirose, M. Augmented reality flavors: Gustatory display based on edible marker and cross-modal interaction. Proc. of CHI 2011. 93–102.

    4. Ranasinghe, N., Karunanayaka, K., Cheok, A.D., Fernando, O.N.N., Nii, H., and Gopalakrishnakone, P. Digital taste and smell communication. Proc. of the 6th International Conference on Body Area Networks. ICST, 2011, 78–84.

    5. Obrist, M., Comber, R., Subramanian, S., Piqueras-Fiszman, B., Velasco, C., and Spence, C. Temporal, affective, and embodied characteristics of taste experiences: A framework for design. Proc. of CHI 2014. 2853–2862.

    6. Haverkamp, M. Synesthetic Design: Handbook for a Multi-sensory Approach. Birkhäuser Verlag, Basel, 2013.

    7. Burgess, M. We got sprayed in the face by a 9D television. Wired (May 20, 2016) http://www.wired.co.uk/article/9d-television-touch-smell-taste

    Marianna Obrist is a reader in interaction design at the University of Sussex, U.K., and head of the Sussex Computer Human Interaction (SCHI "Sky") Lab (http://www.multisensory.info/). Her research focuses on the systematic exploration of touch, taste, and smell experiences as future interaction modalities. [email protected]

    Carlos Velasco (http://carlosvelasco.co.uk/) is a member of the Crossmodal Research Laboratory, University of Oxford, U.K., and a postdoctoral research fellow at the Imagineering Institute, Iskandar, Malaysia. His research focuses on crossmodal perception and its applications. [email protected]

    Chi Thanh Vi is a postdoctoral research fellow at the SCHI Lab at the University of Sussex. He is interested in using different brain-sensing methods to understand the neural basis of user states, and the effect of taste on decision-making behavior. [email protected]

    Nimesha Ranasinghe (http://nimesha.info) is a research fellow at the National University of Singapore. His research interests include digital multisensory interactions (taste and smell), wearable computing, and HCI. During his Ph.D. studies he invented virtual taste technology. [email protected]

    Ali Israr is a senior research engineer at Disney Research, Pittsburgh, USA. He is exploring the role of haptics in multimodal and multisensory settings such as VR/AR, wearables, and handhelds, and in gaming. [email protected]

    Adrian David Cheok (http://adriancheok.info) is director of the Imagineering Institute, Iskandar, Malaysia, and a chair professor of pervasive computing at City University London. His research focuses on mixed reality, HCI, wearable computers and ubiquitous computing, fuzzy systems, embedded systems, and power electronics. [email protected]

    Charles Spence (http://www.psy.ox.ac.uk/team/charles-spence) is the head of the Crossmodal Research Laboratory, University of Oxford, U.K. His research focuses on how a better understanding of the human mind will lead to the better design of multisensory foods, products, interfaces, and environments. [email protected]

    Ponnampalam Gopalakrishnakone is professor emeritus in anatomy at the Yong Loo Lin School of Medicine, National University of Singapore, and chairman of the Venom and Toxin Research Programme at the National University of Singapore. [email protected]

    ©2016 ACM 1072-5220/16/09 $15.00

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

    The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.


    Introduction

    During the last century, research has been increasingly drawn toward understanding the human–nature relationship (1, 2) and has revealed the many ways humans are linked with the natural environment (3). Some examples of these include humans’ preference for scenes dominated by natural elements (4), the sustainability of natural resources (5, 6), and the health benefits associated with engaging with nature (7–9).

    Of these examples, the impacts of the human–nature relationship on people’s health have attracted growing interest as evidence for a connection accumulates in the research literature (10). This connection has underpinned a host of theoretical and empirical research across fields that until now have largely remained separate.

    Since the late nineteenth century a number of descriptive models have attempted to encapsulate the dimensions of human and ecosystem health as well as their interrelationships. These include the Environment of Health (11), the Mandala of Health (12), the Wheel of Fundamental Human Needs (13), the Healthy Communities (14), the One Health (15), and the bioecological systems theory (16). None, however, has fully incorporated all relevant dimensions or balanced the biological, social, and spatial perspectives (17, 18). In part this is due to the challenges of an already complex research base in relation to its concept, evidence base, measurement, and strategic framework. Further attention to the complexities of these aspects, interlinkages, processes, and relations is required for a deeper understanding to be reached and causal directions to be identified (19).

    This article reviews the interconnectivities between the human–nature relationship and human health. It begins by reviewing each of their concepts and methodological approaches. These concepts are then brought together to identify areas of overlap, along with existing research on the potential health impacts of humanity’s degree of relationship to nature and lifestyle choices. From this, a developing conceptual model is proposed that is inclusive of the human-centered perspective on health, viewing animals and the wider environment within the context of their relationship to humans. The model combines theoretical concepts and methodological approaches from the research fields examined in this review to facilitate a deeper understanding of the intricacies involved in improving human health.


    How to constrain theory to match the interactionally relevant facts

    To test theories in an ecologically valid way, it is important to distinguish between the facts available to participants in the context of an interaction and those that may become available to researchers in the context of analysis. A related distinction is often made in the philosophy of science between ‘contexts of discovery’ and ‘contexts of justification’ (Schickore & Steinle, 2006), although there are many field-specific interpretations and applications of this distinction (Hoyningen-Huene, 2006). In the context of a broader project to improve research reproducibility, Nosek et al. (2018) suggest this distinction is equivalent to the differences between hypothesis-generation and hypothesis-testing, inductive and deductive methods, or exploratory and confirmatory research. However, here we argue that a particular interpretation of this distinction should be used in the field of human interaction research, and suggest that this interpretation is especially useful for constraining the process of theorizing in ways that can improve ecological validity.

    Distinguish contexts of discovery from contexts of justification

    The ‘context of discovery’ is the situation in which a phenomenon of interest is first encountered. For example, when studying human interaction, a useful context of discovery would be an everyday conversation that happened to be recorded for analysis (Potter, 2002). ‘Contexts of justification’, in this example, might include the lab meeting, the conference discussion, and the academic literature within which the empirical details are reported, analyzed, and formulated as a scientific discovery (Bjelic & Lynch, 1992). Table 1 lists some resources for making sense of an interaction that either participants or analysts can use when discovering and justifying interactional phenomena. The third column shows some interactional resources that are available from both perspectives. For example, both participants and overlooking analysts can use observable features of the setting and the visible actions of the people within it to discover new phenomena. Both participants and analysts can also detect when these actions are produced smoothly, contiguously, and without interruption (Sacks, 1987). Both can see if certain actions are routinely matched into patterns of paired or ‘adjacent’ initiations and responses (Heritage, 1984, p. 256). Similarly, both analysts and participants can observe when flows of initiation and response seem to break down, falter, or require ‘repair’ to re-establish orderliness and ongoing interaction (Schegloff, Jefferson, & Sacks, 1977). By contrast, many other resources and methods for making sense of the situation are exclusively available from one perspective or the other. For example, analysts can repeatedly listen to a recording, slow it down, speed it up, and can precisely measure, quantify, and deduce cumulative facts that would be inaccessible to participants in the interaction. Similarly, participants may draw on tacit knowledge and use introspection—options which are not necessarily available for overlooking analysts—to make sense of the current state and likely outcomes of the interaction. The risk of ignoring these distinctions is that theories about how people make sense of social interaction can easily become uncoupled from empirical evidence about what the participants themselves treat as meaningful through their behavior in the situation (Garfinkel, 1964; Lynch, 2012).

    Table 1. Participants’, analysts’, and shared resources across the contexts of discovery and justification.

    Context       | Participants’ resources                                | Analysts’ resources                                     | Shared resources
    Discovery     | Knowledge & experience beyond the current interaction | Ability to fast forward, rewind, & replay interactions | Observable social actions & settings
    Justification | Introspection, inductive reasoning                     | Quantification & deductive analysis                    | Sequential organization of talk & social action

    Consider participants’ situational imperatives

    In order to ecologically ground theories in the context of interaction, we should constrain our theorizing to take account of what can be tested using the different kinds of evidence and methods available to both analysts and participants. Analysts should try to harness as many resources from the participants’ ‘context of discovery’ as possible, but it is also important that they take into account how the participants’ involvement in the situation is motivated by entirely different concerns. The drinker and the bartender do not usually go to a bar to provide causal explanations for interactional phenomena discovered in that setting for the benefit of scientific research. Their actions are mobilized by the mutual accountability of one person ordering a drink and the other person pouring it. As Garfinkel (1967, p. 38) demonstrates, failure to fulfill mutually accountable social roles can threaten to ‘breach’ the mutual intelligibility of the situation itself. Bartenders who fail to recognize the behavior of thirsty customers risk appearing inattentive or unprofessional. In an extreme case, failing to behave as a bartender may lead to getting fired and actually ceasing to be one. Similarly, customers who fail to exhibit behaviors recognizable as ordering a drink risk remaining unserved or, in an extreme case, being kicked out of the bar. If neither participant upholds their interactional roles, the entire situation risks becoming unrecognizable as the jointly upheld ordinary activity of ‘being in a bar’ (Sacks, 1984b). Interactional situations have this reflexive structure: they depend on participants behaving in certain ways in order to make the situation recognizable as the kind of situation where that kind of behavior is warranted. This makes it especially important to ground theories about interaction with reference to the resources and methods that are accessible to participants in the situation, and to take account of participants’ situational imperatives.

    Focus on reciprocal interactional behaviors

    Theories about interaction, then, should focus on whatever people in a given interactional situation discover and treat as relevant through their actions. For participants in an interaction what counts as a ‘discovery’ is any action that they, in conversation analytic terms, observably orient towards and treat as relevant in the situation. Justification in the participants’ terms, then, consists of doing the necessary interactional ‘work’ to demonstrate their understanding and make themselves understood to others (Sacks, 1995, p. 252). When people interact they display their understandings and uphold the intelligibility and rationale of their actions (Hindmarsh, Reynolds, & Dunne, 2011). This reflexive process upholds the intelligibility of the social situation they’re currently involved in: an imperative that Garfinkel (1967) describes as ‘mutual accountability’. In our bar example, prospective drinkers and bartenders monitor one another’s behavior and discover, respectively, who is going to serve a drink, and who needs one. The resources they may rely on in order to make these discoveries include their bodily positions, head and gaze orientation, speech, and gesture. Each participant may also rely on cultural knowledge and prior experience of this kind of situation. However, these tacit resources are not directly accessible—neither to the other participants, nor to the overlooking analysts. Similarly, analysts could code and quantify any visible bodily movements and then use statistical methods for ‘exploratory data analysis’ (Jebb, Parrigon, & Woo, 2016) to develop a theory. This could be very misleading and ecologically invalid, though, since this form of analysis is not something participants could use as a resource to make sense of the situation, and it doesn’t necessarily take account of their displays of mutual observation and accountability. Theories about behavior in bars, therefore, should start by trying to explain this situation using only resources that are mutually accessible to participants and analysts. These resources could include any reciprocal interactional behaviors such as how drink-offerings and drink-requests are linked in closely timed sequences of social interaction.
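    As an illustration of the analyst-only resources mentioned above, the sketch below measures the timing of paired actions (for example, a drink request followed by a pour) from annotated timestamps. The annotation format is hypothetical; the point is that this kind of precise, retrospective measurement is available to analysts working with recordings but not to participants in the moment.

        # Hypothetical annotations: (start_time_s, actor, action)
        annotations = [
            (0.0, "customer", "approach_bar"),
            (2.3, "customer", "request_drink"),
            (3.1, "bartender", "acknowledge"),
            (4.0, "bartender", "pour_drink"),
        ]

        def response_latencies(events, first_action, second_action):
            """Return gaps (in seconds) between each first_action and the next second_action."""
            gaps, pending = [], None
            for t, _actor, action in events:
                if action == first_action:
                    pending = t
                elif action == second_action and pending is not None:
                    gaps.append(t - pending)
                    pending = None
            return gaps

        print(response_latencies(annotations, "request_drink", "pour_drink"))  # [1.7]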


    Woebot: A Professional Review

    Colleen Stiles-Shields, PhD, is a licensed clinical psychologist and Assistant Professor in the Department of Psychiatry and Behavioral Sciences at Rush University Medical Center. Prior to her training in Clinical Psychology, Dr. Stiles-Shields was trained in Social Work and was a Licensed Clinical Social Worker (LCSW). Dr. Stiles-Shields’ work focuses on using technology as a delivery mechanism for behavioral health interventions, particularly for pediatric and underserved populations. She has received funding to develop and evaluate mobile apps for depression and anxiety from the National Institute of Mental Health and published over 40 scientific papers. She completed her PhD at Northwestern University’s Feinberg School of Medicine as a member of the Center for Behavioral Intervention Technologies (CBITs).

    Product Description

    Woebot is a fully automated conversational agent (i.e., think along the lines of Siri [Apple] or Alexa [Amazon] interacting with someone via text). Marketed as “your friendly self-care expert,” Woebot is accessible via mobile devices through an instant messenger app (iOS or Android; computer access appears to be available only with membership). The app features lessons (via texts and “stories”), interactive exercises, and videos grounded in Cognitive Behavioral Therapy (CBT). Interactions with Woebot are designed to be brief (1-2 minutes). Longer interactions are typically driven by users opting for more information (e.g., selecting a response type that elicits more interactions) or following a prompt from Woebot that a selected activity may take some time (e.g., “It’ll take 10 minutes, is that ok?”). With its use of typical media seen in a texting environment (e.g., brief messages, gifs, emojis, acronyms), those who frequently communicate this way will navigate this app well, while also engaging in evidence-based skills and strategies for stress and wellness.

    Recommendations for Use

    Woebot is geared towards young adults experiencing stress and wellness difficulties (like trouble sleeping). Indeed, middle-aged and older adults might not follow all of the language Woebot uses or provides as possible user responses. For example, if you would get lost after being asked about your “boo,” this probably isn’t a great fit for you. The app is rated “T” for Teen by the Google Play Store, and does seem to have appropriate content for adolescents. However, the empirical support for Woebot stems from a randomized controlled trial that included young adults only (Fitzpatrick, Darcy & Vierhile, 2017). The app tries to engage a user each day, with push notifications ranging from encouragement for previously completing a task, to checking in about mood, to making a joke (e.g., “I flossed my grills today…”). Nearly daily use is likely a good goal to aim for, particularly at the start of using Woebot. This goal is easy to achieve, as brief check-ins and interactions are simple. Those dealing with mild stresses or feeling a little blue are likely to benefit from using Woebot on their own. Those with more serious concerns (like having a mental health clinician tell you that you have Depression or Anxiety) might benefit from using Woebot while also seeing a therapist, or after learning and practicing some of these skills with a therapist first (sort of like using the app to get a tune-up or a way to keep practicing what you have learned). As Woebot points out, it acts like a robot and cannot always accurately respond to what it is told. For some skills, like thought challenging (a cognitive skill that is part of the “C” of CBT), Woebot cannot always catch errors in completing this skill and a person could get confused on how to do this correctly without the help of a human (e.g., therapist).

    Content

    The content of the app reflects the core skills of Cognitive Behavioral Therapy (CBT), which has a strong evidence base for a variety of conditions when delivered face-to-face (Hofmann, Asnaani, Vonk, Sawyer, & Fang, 2012) and via technological platforms (e.g., a computer or mobile device; Ebert et al., 2015; Spek, Cuijpers, Nyklíček, Riper, Keyzer, & Pop, 2007). The app also includes practicing skills around gratitude and mindfulness. The app was evaluated in a randomized controlled trial in comparison to an “informational control condition” (i.e., those randomized to this condition were given access to an eBook with evidence-based information about depression for college students; National Institute of Mental Health, 2017). The young adults in the active arm of the trial used Woebot nearly every day for two weeks, and as a group, had lower depression scores after these two weeks (Fitzpatrick et al., 2017). Woebot even shares these findings in early interactions with a user: “But my data from my Stanford study shows that most people start to get the hang of things and feel better at 2 weeks.” Of note from this research: the participants were young adults (no one under 18 years of age) and were fairly homogenous from a demographic standpoint (Fitzpatrick et al., 2017). How well these findings might translate to younger and older users (like teens or middle-aged adults) as well as minority and/or special needs groups (e.g., users with cognitive impairments or English as a second language) is unclear.

    Information is presented in varied formats, including: brief text (i.e., “stories” delivered as if being sent via brief text messages) and videos (e.g., Carol Dweck presenting the idea of “Mindsets” in a ten-minute video or a two-minute “What is Mindfulness” video). The information is incredibly concise, especially compared to how long therapeutic explanations of CBT concepts can get. Further, a user can often decide if they want more explanation (e.g., selecting “Got it” vs. “Tell me more” when presented with new information). Also of note, the majority of interactions with the app are layered with empathetic and validating statements. The app also utilizes the user’s name frequently (making statements feel more personal) and often “forces” users to offer praise to themselves after completing tasks on the app. For example, in response to the app sending the message: “Magic! You’re doing great,” the only response option the app may provide is “I sure am!”

    Woebot also includes frequent mood tracking. This mood tracking is varied, including: 1) occasional administration of Patient Health Questionnaire items (i.e., a set of questions measuring depressive symptoms that people are likely to answer in their doctor’s office; Kroenke, Spitzer, & Williams, 2001); 2) asking for quantitative ratings around anxiety (e.g., rate your anxiety from 0 to 5); and 3) selecting emotions to characterize how one is feeling on a given day (e.g., happy, sad, content). This last option is graphed for users to see “your mood over time.” However, when viewing mood over time, moods are represented on a graph in a way that implies that certain moods have different quantitative values. For example, “tired” is represented on the graph as being at a lower level (i.e., “worse”) than “okay”, which may not match how a user would rate their mood. Finally, and consistent with CBT practice, the app also often assesses changes in mood following activities, such as mindful deep breathing.
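    The graphing issue described above amounts to assigning numbers to categorical mood labels. The sketch below makes such a mapping explicit; the values are invented for illustration and are not Woebot's actual scale. It shows how "tired" can end up plotted below "okay" even if the user does not experience it as worse.

        # Hypothetical ordinal mapping a mood graph might impose on categorical labels.
        MOOD_TO_VALUE = {
            "happy": 5,
            "content": 4,
            "okay": 3,
            "tired": 2,   # plotted "lower" than okay, though a tired user may feel fine
            "sad": 1,
        }

        mood_log = ["okay", "tired", "happy", "tired"]
        plotted = [MOOD_TO_VALUE[m] for m in mood_log]
        print(plotted)  # [3, 2, 5, 2] -- the graph implies the two 'tired' days were worse than 'okay'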

    The information provided to users appears clear and appropriate for young adults with stress and wellness concerns. I did not have a single interaction with the app that seemed to stray from the “spirit” of CBT. However, the app does not clearly specify who should use the app and which tools or types of interactions with Woebot might be best based on what’s going on in a user’s life. Some users might therefore wonder if they are using the app in ways that will be most helpful for them, or if they might need a higher level of care. Further, as the app notes to users when they open the app for the first time: no human is monitoring the live conversations. This means that when Woebot allows users to enter “free text” (i.e., write in your own response), the feedback that is provided might be discordant or incomplete. For example, in learning and practicing the CBT-based skill of thought challenging, I entered a thought that I have had many patients share with me in therapy. However, since Woebot could not “read” and respond to this thought, the feedback it was giving was not a great fit with the type of challenging the thought would need. After asking me a few follow-up questions, it essentially apologized that we couldn’t figure that one out and moved on to another task.

    Ease of Use and User Experience

    The app is generally fairly easy to navigate and is primarily driven by interacting with the robot character, Woebot. This keeps a user primarily on the main screen, always with the option of interacting with Woebot in some way.

    Despite the initial screens that appear after download explaining that one simply has to type in “SOS” to receive external resources if in a crisis, this information can’t be found anywhere within the app. The app’s emergency settings can be triggered from conversations; for example, use of the word “crisis” will trigger Woebot to assess if the user or someone they know is in crisis. Woebot can sometimes get into feedback loops. If it does not compute what a user puts in or an interaction gets interrupted (e.g., by taking a break to check email), it can start back over with the same questions. While understandable when interacting with a robot, some users might get frustrated by the repetition. Finally, given that the app uses videos as one means of providing information about CBT-based topics, it is surprising that there are no video/audio options for mindfulness practices. An option to “hear” these instructions would likely better promote the practice.

    Visual Design and User Interface

    The app is attractive and has a good look and feel that mimics a texting environment. Elements are displayed appropriately, and all previous information may be found by scrolling. Further, the varied response options, like the use of emojis and gifs, are engaging. I did note one typo (“So, [Name], were you doing just now?”), but for the high volume of text exchanges that occurred, one typo is pretty excusable. In clicking a specific “story” (e.g., “Are labels bad for us?”), users can revisit a previous lesson. However, each time this occurs it starts a new interaction rather than letting the user revisit their previous responses.

    Overall Impression

    Overall, I enjoyed using Woebot. I found its methods of incorporating CBT into an environment that mimics texting with a robot “friend” to be both creative and sound for young adults with stress and wellness issues. Further, the interjections of random images (e.g., a picture of a cute dog) or corny/punny jokes were engaging and funny. These interjections were also often the “hook” of push notifications, which encourage a user to open the app frequently. Woebot also incorporated large amounts of validation, empathy, and hopefulness that many users would likely appreciate. That said, those who are feeling a bit irritable might not appreciate the gentle humor and enthusiasm of Woebot. Also, those with more severe symptoms (e.g., Major Depressive Disorder) would likely require a higher level of care to help guide and reinforce these CBT skills.

    Ebert, D. D., Zarski, A. C., Christensen, H., Stikkelbroek, Y., Cuijpers, P., Berking, M., & Riper, H. (2015). Internet and computer-based cognitive behavioral therapy for anxiety and depression in youth: A meta-analysis of randomized controlled outcome trials. PLoS One, 10(3), e0119895.

    Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.

    Hofmann, S. G., Asnaani, A., Vonk, I. J., Sawyer, A. T., & Fang, A. (2012). The efficacy of cognitive behavioral therapy: A review of meta-analyses. Cognitive Therapy and Research, 36(5), 427-440.

    Kroenke, K., Spitzer, R. L., & Williams, J. B. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606-613.

    National Institute of Mental Health. (2017). Depression in College Students. Retrieved from: https://www.nimh.nih.gov/health/publications/depression-and-college-students/index.shtml

    Spek, V., Cuijpers, P. I. M., Nyklíček, I., Riper, H., Keyzer, J., & Pop, V. (2007). Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: A meta-analysis. Psychological Medicine, 37(3), 319-328.




    Research and Studies

    Over the past three decades, researchers have increasingly tested the Attention Restoration Theory and experimented with its boundaries. Some of the most important findings come from three separate areas: mental fatigue, stress recovery, and ADHD.

    A few of the studies and their findings are discussed below.

    Mental Fatigue and Attention Restoration Theory

    Studies based on the Attention Restoration Theory have found some good evidence to back up ART’s proposal about nature restoring attention.

    An early study by Hartig, Mang, and Evans (1991) compared two groups of vacationers and a control group on performance in a task that requires high directed attention. One vacation group vacationed in an urban area, while the other vacationed in a wilderness area. All groups were tested twice, once before the vacation (or at the beginning of the study period for the control group) and once after (or at the end of the study period).

    Hartig and colleagues found that those who spent their vacation in a wilderness area performed better on the task than they had pre-vacation, while the other two groups actually performed worse than before. This provided solid initial evidence that ART had some merit as a theory of attention restoration.

    Next, Hartig and colleagues tested participants in three groups:

    1. The natural environment group, which completed attentionally fatiguing tasks then walked for 40 minutes in a natural environment.
    2. The urban environment group, which completed attentionally fatiguing tasks then walked for 40 minutes in an urban environment.
    3. The passive relaxation group, which completed attentionally fatiguing tasks then relaxed for 40 minutes while listening to soft music and reading magazines.

    Again, those in the natural environment outperformed those in the other two groups. They also reported the highest “restorativeness” score based on self-report measures of the four key components of restorative environments (Hartig, 1991).

    More recent work on the subject comes from Rita Berto, who induced mental fatigue in participants through a sustained attention test, then exposed them to photographs of restorative environments, non-restorative environments, or geometric patterns (2005). Once participants had viewed the photos, they completed the sustained attention test once again. Those who were exposed to restorative photos improved their performance on the task, whether they viewed the photos for a set period of time or were self-paced, while those in the other groups did not.

    Finally, researchers Carolyn M. Tennessen and Bernardine Cimprich (1995) investigated whether simply having a better view of nature can improve one’s attention and boost restoration. They compared university students’ performance on tests of directed attention based on the degree of nature in the view from their dormitory window. Those who were able to view more nature outside their window performed better on the battery of tests than those without, providing further support for the theory.

    These studies showed that ART had promise for explaining how spending time in nature can help us restore our attention, especially after depleting that attention. However, the evidence doesn’t stop there.

    Using Attention Restoration Theory in Stress Recovery

    There is also evidence that ART is correct in its proposal that restorative natural environments can aid in recovering from stress.

    In a study involving participants in what is arguably the most stressful time of their lives, researcher Bernardine Cimprich found that recovering breast cancer patients who spent time in natural, restorative environments showed improved performance in attention-related tasks, higher likelihood to go back to work and to return to working full time, greater inclination towards starting new projects, and higher gains in quality of life (1993).

    Even if time spent in natural environments is not a specially planned outing or conscious effort on the part of the individual, it can still benefit them.

    Further evidence comes from a more recent study by researchers van den Berg, Maas, Verheij, and Groenewegen (2010). Their study showed that just having some green space around one’s home can help protect people from the negative health impacts of stress and particularly stressful life events. Those with a high amount of green space around their home were less affected by a stressful life event and reported greater perceived mental health than those with little or no green space nearby.

    For more evidence on how ART gets it right when it comes to stress recovery, see Rita Berto’s article “The Role of Nature in Coping with Psycho-Physiological Stress: A Literature Review on Restorativeness” (2014). This info-packed piece lays out a ton of evidence on how nature influences and restores us.

    ADHD and Attention Restoration Theory

    While work on the connection between natural, restorative environments and Attention-Deficit Hyperactivity Disorder (ADHD) is much younger than that on stress recovery and mental fatigue, there is some evidence that the Attention Restoration Theory can apply to those struggling with ADHD as well.

    A study from 2011 compared the behavioral, emotional, and cognitive functioning of children with ADHD during visits to two different areas: a natural, wooded area and a built, small town area. The children performed better on a concentration task when visiting the wooded area than when visiting the town, even though the town visit happened after the wooded area visit. In addition to the concentration task findings, the children generally reported more positive emotions and fewer behavioral problems in the wooded area than in the small town area (van den Berg & van den Berg, 2011).

    Another study by researcher Laura Thal (2014) explored the effects of 20-minute walks in either a natural area or an urban area on measures of cognitive performance and symptoms of ADHD. In line with ART, those who completed the nature walk reported improved cognitive performance and reduced ADHD symptoms. Further, their results on one of the cognitive performance measures were significantly higher than those of participants who completed the urban walk. In addition to these improvements, those who walked in nature reported that their walk was more restorative than those who had walked in an urban environment.

    The results from these studies and others like them suggest that simply spending a little more time in nature can ease the symptoms of ADHD for children and young adults who suffer from it. It may not replace medication (or perhaps it could?) but it certainly won’t hurt!


    Metaphors in HCI

    Metaphors are widely used in HCI as a vehicle for representing and developing designs. A metaphor is a mapping from a familiar object to an unfamiliar one, and it provides a framework for familiarizing an unknown concept through that mapping. The role of a metaphor in HCI is to facilitate developing, maintaining, and learning the conceptual foundation of the interactive design as well as orienting the user with it (Saffer, 2005). Using metaphors involves the exploration and expression of an idea that is integral to design generation and innovation (Brady, 2003). In this perspective, metaphors can be used as a tool in the design process to understand new topic areas or as a means to create new ideas about familiar subjects. They enhance our perception by transforming our sense of reality (Ricoeur, 1991), and new metaphors can create a comprehensive conceptual system (Lakoff, 1993). Metaphors can also assist in engaging designers’ existing mental models. A mental model is an organized collection of data that acts as a representation of a thought process (Marcus, 1992, 1995). Mental models are analogs of real-world processes that capture some aspects of those processes (Gentner and Gentner, 2014). Analogical reasoning, an inference method in design cognition (Gero and Maher, 1991), is a method for developing designs that can lead to unexpected discoveries. In conceptualizing a new interactive system, a metaphor can be a useful tool for establishing a common mental model for designers. We claim that the positive impact of metaphors in HCI can be beneficial in conceptualizing smart environments. In this paper, we describe how metaphorical design enables the conceptualization of smart environment designs, in which different metaphors lead to new conceptual spaces.

    The most well-known metaphor in HCI is the desktop metaphor, which represents the user interface in a way that is similar to interactions concerning certain objects, tasks, and behaviors encountered in physical office environments. Despite the desktop metaphor still being predominant in the personal computing environment, it shows problems and limitations (Moran and Zhai, 2007; Antle et al., 2009; Houben, 2011; Jung et al., 2017) in being adapted into recent interaction designs (e.g. tangible interaction, speech interaction, and mixed reality) since it focuses on personal computing and visual interface design. While much of the research done on metaphors in user interface design has been focused upon the use of metaphors in the design of visual communication elements of the graphical user interface (GUI) and in understanding users’ mental models of such interfaces (Voida et al., 2008; Antle et al., 2009; Houben, 2011), some researchers have pointed out the limitations of the desktop metaphor and proposed alternative metaphors (Abowd and Mynatt, 2000; Moran and Zhai, 2007; Antle et al., 2009; Jung et al., 2017). They showed several dimensions (e.g. context, modality, materiality, and affordance) of alternative metaphors as a systematic strategy to emphasize a new interactive form which can be conceptualized with metaphorical mapping. A smart environment provides potential design spaces that are yet to be fully explored and understood. We posit that new forms of smart environment can be characterized by comprehensive metaphors that can uncover potential design spaces for a smart environment by providing a common mental model.


    Acknowledgements

    We thank the host communities with whom we have worked for their patience, collaboration and the knowledge that they have shared. We also thank Claudia Jacobi and the staff at MPI-EVA in Leipzig for their work in hosting the workshop, and Shani Msafiri Mangola, Elspeth Ready, Tim Caro and Daniel Benyshek for helpful feedback on earlier drafts of this manuscript. T.B. also thanks the Coady International Institute, particularly Allison Mathie and Gord Cunningham for hosting, teaching and supporting her transition to participant-engaged research.


    Metaphors in HCI

    Metaphors are widely used in HCI as a vehicle for representing and developing designs. A metaphor is a mapping process from a familiar object to an unfamiliar object, and it provides the framework to familiarize an unknown concept through a mapping process. The role of a metaphor in HCI is to facilitate developing, maintaining, and learning the conceptual foundation of the interactive design as well as orienting the user with it (Saffer, 2005). Using metaphors involves the exploration and expression of an idea that is integral to design generation and innovation (Brady, 2003). In this perspective, metaphors can be used as a tool in the design process to understand new topic areas or as a means to create new ideas about familiar subjects. They enhance our perception by transforming our sense of reality (Ricoeur, 1991), and new metaphors can create a comprehensive conceptual system (Lakoff, 1993). Metaphors can also assist in engaging the designers’ existing mental models. A mental model is an organized collection of data that acts as a representation of a thought process (Marcus, 1992, 1995). Mental models refer to analogs of real-world processes, including some other aspects of real-world processes (Gentner and Gentner, 2014). Analogical reasoning, an inference method in design cognition (Gero and Maher, 1991), is a method for developing designs that can lead to unexpected discoveries. In conceptualizing a new interactive system, a metaphor can be a useful tool for establishing a common mental model for designers. We claim that the positive impact of metaphors in HCI can be beneficial in conceptualizing smart environment. In this paper, we describe how metaphorical design enables the conceptualizing of a smart environment design in which different metaphors lead to new conceptual spaces.

    The most well-known metaphor in HCI is the desktop metaphor, which represents the user interface in a way that is similar to interactions concerning certain objects, tasks, and behaviors encountered in physical office environments. Despite the desktop metaphor still being predominant in the personal computing environment, it shows problems and limitations (Moran and Zhai, 2007 Antle et al., 2009 Houben, 2011 Jung et al., 2017) in being adapted into recent interaction designs (e.g. tangible interaction, speech interaction, and mixed reality) since it focuses on the personal computing and visual interface design. While much of the research done on metaphors in user interface design has been focused upon the use of metaphors in the design of visual communication elements of the graphical user interface (GUI) and in understanding users’ mental models of such interfaces (Voida et al., 2008 Antle et al., 2009 Houben, 2011), some researchers have pointed out the limitations of the desktop metaphor and proposed alternative metaphors (Abowd and Mynatt, 2000 Moran and Zhai, 2007 Antle et al., 2009 Jung et al., 2017). They showed several dimensions (e.g. context, modality, materiality, and affordance) of alternative metaphors as a systematic strategy to emphasize a new interactive form which can be conceptualized with metaphorical mapping. A smart environment provides potential design spaces that are yet to be fully explored and understood. We posit that new forms of smart environment can be characterized by comprehensive metaphors that can uncover potential design spaces for a smart environment by providing a common mental model.


    Research and Studies

    Over the past three decades, researchers have increasingly tested the Attention Restoration Theory and experimented with its boundaries. Some of the most important findings from these studies have been found in three separate areas:

    A few of the studies and their findings are discussed below.

    Mental Fatigue and Attention Restoration Theory

    Studies based on the Attention Restoration Theory have found some good evidence to back up ART’s proposal about nature restoring attention.

    An early study by Hartig, Mang, and Evans (1991) compared two groups of vacationers and a control group on performance in a task that requires high directed attention. One vacation group vacationed in an urban area, while the other vacationed in a wilderness area. All groups were tested twice, once before the vacation (or at the beginning of the study period for the control group) and once after (or at the end of the study period).

    Hartig and colleagues found that those who spent their vacation in a wilderness area performed better on the task than they had pre-vacation, while the other two groups actually performed worse than before. This provided solid initial evidence that ART had some merit as a theory of attention restoration.

    Next, Hartig and colleagues tested participants in three groups:

    1. The natural environment group, which completed attentionally fatiguing tasks then walked for 40 minutes in a natural environment.
    2. The urban environment group, which completed attentionally fatiguing tasks then walked for 40 minutes in an urban environment.
    3. The passive relaxation group, which completed attentionally fatiguing tasks then relaxed for 40 minutes while listening to soft music and reading magazines.

    Again, those in the natural environment outperformed those in the other two groups. They also reported the highest “restorativeness” score based on self-report measures of the four key components of restorative environments (Hartig, 1991).

    More recent work on the subject comes from Rita Berto, who induced mental fatigue in participants through a sustained attention test, then exposed them to photographs of restorative environments, non-restorative environments, or geometric patterns (2005). Once participants had viewed the photos, they completed the sustained attention test once again. Those who were exposed to restorative photos improved their performance on the task, whether they viewed the photos for a set period of time or were self-paced, while those in the other groups did not.

    Finally, researchers Carolyn M. Tennessen and Bernardine Cimprich (1995) investigated whether simply having a better view of nature can improve one’s attention and boost restoration. They compared university students’ performance on tests of directed attention based on the degree of nature in the view from their dormitory window. Those who were able to view more nature outside their window performed better on the battery of tests than those without, providing further support for the theory.

    These studies showed that ART had promise for explaining how spending time in nature can help us restore our attention, especially after depleting that attention. However, the evidence doesn’t stop there.

    Using Attention Restoration Theory in Stress Recovery

    There is also evidence that ART is correct in its proposal that restorative natural environments can aid in recovering from stress.

    In a study involving participants in what is arguably the most stressful time of their lives, researcher Bernardine Cimprich found that recovering breast cancer patients who spent time in natural, restorative environments showed improved performance in attention-related tasks, higher likelihood to go back to work and to return to working full time, greater inclination towards starting new projects, and higher gains in quality of life (1993).

    Even if time spent in natural environments is not a specially planned outing or conscious effort on the part of the individual, it can still benefit them.

    Further evidence comes from a more recent study by researchers van den Berg, Maas, Verheij, and Groenewegen (2010). Their study showed that just having some green space around one’s home can help protect people from the negative health impacts of stress and particularly stressful life events. Those with a high amount of green space around their home were less affected by a stressful life event and reported greater perceived mental health than those with little or no green space nearby.

    For more evidence on how ART gets it right when it comes to stress recovery, see Rita Berto’s article “The Role of Nature in Coping with Psycho-Physiological Stress: A Literature Review on Restorativeness”(2014). This info-packed piece lays out a ton of evidence on how nature influences and restores us.

    ADHD and Attention Restoration Theory

    While work on the connection between natural, restorative environments and Attention-Deficit Hyperactivity Disorder (ADHD) is much younger than that on stress recovery and mental fatigue, there is some evidence that the Attention Restoration Theory can apply to those struggling with ADHD as well.

    A study from 2011 compared the behavioral, emotional, and cognitive functioning of children with ADHD during visits to two different areas: a natural, wooded area and a built, small town area. The children performed better on a concentration task when visiting the wooded area than when visiting the town, even though the town visit happened after the wooded area visit. In addition to the concentration task findings, the children generally reported more positive emotions and less behavioral problems in the wooded area than in the small town area (van den Berg & van den Berg, 2011).

    Another study by researcher Laura Thal (2014) explored the effects of 20-minute walks in either a natural area or an urban area on measures of cognitive performance and symptoms of ADHD. In line with ART, those who completed the nature walk reported improved cognitive performance and reduce ADHD symptoms. Further, their results on one of the cognitive performance measures were significantly higher than those who completed the urban walk. In addition to these improvements, those who walked in nature reported that their walk was more restorative than those who had walked in an urban environment.

    The results from these studies and others like them suggest that simply spending a little more time in nature can ease the symptoms of ADHD for children and young adults who suffer from it. It may not replace medication (or perhaps it could?) but it certainly won’t hurt!


    Acknowledgements

    We thank the host communities with whom we have worked for their patience, collaboration and the knowledge that they have shared. We also thank Claudia Jacobi and the staff at MPI-EVA in Leipzig for their work in hosting the workshop, and Shani Msafiri Mangola, Elspeth Ready, Tim Caro and Daniel Benyshek for helpful feedback on earlier drafts of this manuscript. T.B. also thanks the Coady International Institute, particularly Allison Mathie and Gord Cunningham for hosting, teaching and supporting her transition to participant-engaged research.


    Woebot: A Professional Review

    Reading Time: 6 minutes Colleen Stiles-Shields, PhD, is a licensed clinical psychologist and Assistant Professor in the Department of Psychiatry and Behavioral Sciences at Rush University Medical Center. Prior to her training in Clinical Psychology, Dr. Stiles-Shields was trained in Social Work and was a Licensed Clinical Social Worker (LCSW). Dr. Stiles-Shields’ work focuses on using technology as a delivery mechanism for behavioral health interventions, particularly for pediatric and underserved populations. She has received funding to develop and evaluate mobile apps for depression and anxiety from the National Institute of Mental Health and published over 40 scientific papers. She completed her PhD at Northwestern University’s Feinberg School of Medicine as a member of the Center for Behavioral Intervention Technologies (CBITs).

    Product Description

    Woebot is a fully automated conversational agent (i.e., think along the lines of Siri [Apple] or Alexa [Amazon] interacting with someone via text). Marketed as “your friendly self-care expert,” Woebot is accessible via mobile devices through an instant messenger app (iOS or Android computer access appears to only be available with membership). The app features lessons (via texts and “stories”), interactive exercises, and videos grounded in Cognitive Behavioral Therapy (CBT). Interactions with Woebot are designed to be brief (1-2 minutes). Longer interactions are typically driven by users opting for more information (e.g., selecting a response type that elicits more interactions) or following a prompt from Woebot that a selected activity may take some time (e.g., “It’ll take 10 minutes, is that ok?”). With its use of typical media seen in a texting environment (e.g., brief messages, gifs, emojis, acronyms), those who frequently communicate this way will navigate this app well, while also engaging in evidence-based skills and strategies for stress and wellness.

    Recommendations for Use

    Woebot is geared towards young adults experiencing stress and wellness difficulties (like trouble sleeping). Indeed, middle-aged and older adults might not follow all of the language Woebot uses or provides as possible user responses. For example, if you would get lost after being asked about your “boo,” this probably isn’t a great fit for you. The app is rated “T” for Teen by the Google Play Store, and does seem to have appropriate content for adolescents. However, the empirical support for Woebot stems from a randomized controlled trial that included young adults only (Fitzpatrick, Darcy & Vierhile, 2017). The app tries to engage a user each day, with push notifications ranging from encouragement for previously completing a task, to checking in about mood, to making a joke (e.g., “I flossed my grills today…”). Nearly daily use is likely a good goal to aim for, particularly at the start of using Woebot. This goal is easy to achieve, as brief check-ins and interactions are simple. Those dealing with mild stresses or feeling a little blue are likely to benefit from using Woebot on their own. Those with more serious concerns (like having a mental health clinician tell you that you have Depression or Anxiety) might benefit from using Woebot while also seeing a therapist, or after learning and practicing some of these skills with a therapist first (sort of like using the app as a tune-up or a way to keep practicing what you have learned). As Woebot points out, it is a robot and cannot always accurately respond to what it is told. For some skills, like thought challenging (a cognitive skill that is part of the “C” of CBT), Woebot cannot always catch errors, and a person could get confused about how to complete the skill correctly without the help of a human (e.g., a therapist).

    Content

    The content of the app reflects the core skills of Cognitive Behavioral Therapy (CBT), which has a strong evidence base for a variety of conditions when delivered face-to-face (Hofmann, Asnaani, Vonk, Sawyer, & Fang, 2012) and via technological platforms such as a computer or mobile device (Ebert et al., 2015; Spek, Cuijpers, Nyklíček, Riper, Keyzer, & Pop, 2007). The app also includes practicing skills around gratitude and mindfulness. The app was evaluated in a randomized controlled trial in comparison to an “informational control condition” (i.e., those randomized to this condition were given access to an eBook with evidence-based information about depression for college students; National Institute of Mental Health, 2017). The young adults in the active arm of the trial used Woebot nearly every day for two weeks and, as a group, had lower depression scores after these two weeks (Fitzpatrick et al., 2017). Woebot even shares these findings in early interactions with a user: “But my data from my Stanford study shows that most people start to get the hang of things and feel better at 2 weeks.” Of note from this research: the participants were young adults (no one under 18 years of age) and were fairly homogenous from a demographic standpoint (Fitzpatrick et al., 2017). How well these findings might translate to younger and older users (like teens or middle-aged adults) as well as minority and/or special-needs groups (e.g., users with cognitive impairments or English as a second language) is unclear.

    Information is presented in varied formats, including brief text (i.e., “stories” delivered as if being sent via brief text messages) and videos (e.g., Carol Dweck presenting the idea of “Mindsets” in a ten-minute video, or a two-minute “What is Mindfulness” video). The information is incredibly concise, especially compared to how long therapeutic explanations of CBT concepts can get. Further, a user can often decide if they want more explanation (e.g., selecting “Got it” vs. “Tell me more” when presented with new information). Also of note, the majority of interactions with the app are layered with empathetic and validating statements. The app also uses the user’s name frequently (making statements feel more personal) and often “forces” users to offer praise to themselves after completing tasks on the app. For example, in response to the app sending the message “Magic! You’re doing great,” the only response option the app may provide is “I sure am!”

    Woebot also includes frequent mood tracking. This mood tracking is varied, including: 1) occasional administration of Patient Health Questionnaire items (i.e., a set of questions measuring depressive symptoms that people are likely to answer in their doctor’s office; Kroenke, Spitzer, & Williams, 2001); 2) asking for quantitative ratings of anxiety (e.g., rate your anxiety from 0 to 5); and 3) selecting emotions to characterize how one is feeling on a given day (e.g., happy, sad, content). This last option is graphed for users to see “your mood over time.” However, when viewing mood over time, moods are represented on a graph in a way that implies that certain moods have different quantitative values. For example, “tired” is represented on the graph as being at a lower level (i.e., “worse”) than “okay,” which may not match how a user would rate their mood. Finally, and consistent with CBT practice, the app also often assesses changes in mood following activities, such as mindful deep breathing.

    The information provided to users appears clear and appropriate for young adults with stress and wellness concerns. I did not have a single interaction with the app that seemed to stray from the “spirit” of CBT. However, the app does not clearly specify who should use it, or which tools or types of interactions with Woebot might be best based on what’s going on in a user’s life. Some users might therefore wonder if they are using the app in ways that will be most helpful for them, or if they might need a higher level of care. Further, as the app notes to users when they open it for the first time, no human is monitoring the live conversations. This means that when Woebot allows users to enter “free text” (i.e., write in your own response), the feedback that is provided might be discordant or incomplete. For example, in learning and practicing the CBT-based skill of thought challenging, I entered a thought that I have had many patients share with me in therapy. However, since Woebot could not “read” and respond to this thought, the feedback it gave was not a great fit with the type of challenging the thought would need. After asking me a few follow-up questions, it essentially apologized that we couldn’t figure that one out and moved on to another task.

    Ease of Use and User Experience

    The app is generally fairly easy to navigate and is primarily driven by interacting with the robot character, Woebot. This keeps a user primarily on the main screen, always with the option of interacting with Woebot in some way.

    Although the initial screens that appear after download explain that one simply has to type “SOS” to receive external resources if in a crisis, this information cannot be found anywhere else within the app. The app’s emergency settings can, however, be triggered from conversations; for example, use of the word “crisis” will prompt Woebot to assess whether the user or someone they know is in crisis. Woebot can sometimes get into feedback loops. If it cannot interpret what a user types in, or an interaction gets interrupted (e.g., by taking a break to check email), it can start over with the same questions. While understandable when interacting with a robot, some users might get frustrated by the repetition. Finally, given that the app uses videos as one means of providing information about CBT-based topics, it is surprising that there are no video/audio options for the mindfulness practices. An option to “hear” these instructions would likely better promote the practice.

    Visual Design and User Interface

    The app is attractive and has a good look and feel that mimics a texting environment. Elements are displayed appropriately, and all previous information may be found by scrolling. Further, the varied response options, like the use of emojis and gifs, are engaging. I did note one typo (“So, [Name], were you doing just now?”), but for the high volume of text exchanges that occurred, one typo is pretty excusable. By clicking a specific “story” (e.g., “Are labels bad for us?”), users can revisit a previous lesson. However, each time this occurs, it is a new interaction rather than allowing a user to revisit their previous responses.

    Overall Impression

    Overall, I enjoyed using Woebot. I found its methods of incorporating CBT into an environment that mimics texting with a robot “friend” to be both creative and sound for young adults with stress and wellness issues. Further, the interjections of random images (e.g., a picture of a cute dog) or corny/punny jokes were engaging and funny. These interjections were also often the “hook” of push notifications, which encourage a user to open the app frequently. Woebot also incorporated large amounts of validation, empathy, and hopefulness that many users would likely appreciate. That said, those who are feeling a bit irritable might not appreciate the gentle humor and enthusiasm of Woebot. Also, those with more severe symptoms (e.g., Major Depressive Disorder) would likely require a higher level of care to help guide and reinforce these CBT skills.

    Ebert, D. D., Zarski, A. C., Christensen, H., Stikkelbroek, Y., Cuijpers, P., Berking, M., & Riper, H. (2015). Internet and computer-based cognitive behavioral therapy for anxiety and depression in youth: A meta-analysis of randomized controlled outcome trials. PLoS ONE, 10(3), e0119895.

    Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.

    Hofmann, S. G., Asnaani, A., Vonk, I. J., Sawyer, A. T., & Fang, A. (2012). The efficacy of cognitive behavioral therapy: A review of meta-analyses. Cognitive Therapy and Research, 36(5), 427-440.

    Kroenke, K., Spitzer, R. L., & Williams, J. B. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606-613.

    National Institute of Mental Health. (2017). Depression in College Students. Retrieved from https://www.nimh.nih.gov/health/publications/depression-and-college-students/index.shtml

    Spek, V., Cuijpers, P. I. M., Nyklíček, I., Riper, H., Keyzer, J., & Pop, V. (2007). Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: A meta-analysis. Psychological Medicine, 37(3), 319-328.




    Tools that help people learn

    I aspire to build systems that make it possible to deploy effective pedagogical interventions at scale (e.g., Learnersourcing project) or in contexts where such interventions would be difficult to apply without help from technology (e.g., PETALS project).

    Projects

    TELLab: Experimentation @ Scale to Support Experiential Learning in Social Sciences and Design

    Well-conducted lecture demonstrations of natural phenomena improve students' engagement, learning and retention of knowledge. Similarly, laboratory modules that allow for genuine exploration and discovery of relevant concepts can improve learning outcomes. These pedagogical techniques are used frequently in natural sciences and engineering to teach students about phenomena in the physical world. But how might we conduct a lecture demonstration to demonstrate impact of extraneous cognitive load on performance? How might we design a lab, in which students explore how adding decorations to visualizations impacts the comprehension and memorability of visualizations? We are developing tools, content and procedures to bring experiential learning techniques to social science and design-related courses that teach concepts related to human perception, cognition and behavior. Specifically, we are working to develop software technologies to enable rapid, large-scale and ethical online human-subjects experimentation in undergraduate design-related courses. See the project web site for more.

    Na Li, Krzysztof Z. Gajos, Ken Nakayama, and Ryan Enos. TELLab: An Experiential Learning Tool for Psychology. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale, L@S '15, pages 293–297, New York, NY, USA, 2015. ACM.
    [Abstract, BibTeX, etc.]

    Organic Peer Assessment

    We are developing tools and techniques for organic peer assessment, an approach where assessment occurs as a side effect of students performing activities, which they find intrinsically motivating. Our preliminary results, obtained in the context of a flipped classroom, show that the quality of the summative assessment produced by the peers matched that of experts, and we encountered strong evidence that our peer assessment implementation had positive effects on achievement.

    Steven Komarov and Krzysztof Z. Gajos. Organic Peer Assessment. In Proceedings of the CHI 2014 Learning Innovation at Scale workshop , 2014.
    [Abstract, BibTeX, etc.]

    Learnersourcing: Leveraging Crowds of Learners to Improve the Experience of Learning from Videos

    Rich knowledge about the content of educational videos can be used to enable more effective and more enjoyable learning experiences. We are developing tools that leverage crowds of learners to collect rich meta data about educational videos as a byproduct of the learners' natural interactions with the videos. We are also developing tools and techniques that use these meta data to improve the learning experience for others.

    Sarah Weir, Juho Kim, Krzysztof Z. Gajos, and Robert C. Miller. Learnersourcing Subgoal Labels for How-to Videos. In Proceedings of CSCW'15 , 2015.
    [Abstract, BibTeX, etc.]

    Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen (Daniel) Li, Krzysztof Z. Gajos, and Robert C. Miller. Data-Driven Interaction Techniques for Improving Navigation of Educational Videos. In Proceedings of UIST'14 , 2014. To appear.
    [Abstract, BibTeX, Video, etc.]

    Juho Kim, Phu Nguyen, Sarah Weir, Philip J Guo, Robert C Miller, and Krzysztof Z. Gajos. Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos. In Proceedings of CHI 2014 , 2014. To appear. Honorable Mention
    [Abstract, BibTeX, etc.]

    Juho Kim, Shang-Wen (Daniel) Li, Carrie J. Cai, Krzysztof Z. Gajos, and Robert C. Miller. Leveraging Video Interaction Data and Content Analysis to Improve Video Learning. In Proceedings of the CHI 2014 Learning Innovation at Scale workshop , 2014.
    [Abstract, BibTeX, etc.]

    Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, and Robert C. Miller. Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos. In Proceeding of Learning at Scale 2014 , 2014. To appear.
    [Abstract, BibTeX, etc.]

    Juho Kim, Robert C. Miller, and Krzysztof Z. Gajos. Learnersourcing subgoal labeling to support learning from how-to videos. In CHI '13 Extended Abstracts on Human Factors in Computing Systems , CHI EA '13, pages 685-690, New York, NY, USA, 2013. ACM.
    [Abstract, BibTeX, etc.]

    Ingenium: Improving Engagement and Accuracy with the Visualization of Latin for Language Learning

    Learners commonly make errors in reading Latin, because they do not fully understand the impact of Latin's grammatical structure--its morphology and syntax--on a sentence's meaning. Synthesizing instructional methods used for Latin and artificial programming languages, Ingenium visualizes the logical structure of grammar by making each word into a puzzle block, whose shape and color reflect the word's morphological forms and roles. See the video to see how it works.

    Sharon Zhou, Ivy J. Livingston, Mark Schiefsky, Stuart M. Shieber, and Krzysztof Z. Gajos. Ingenium: Engaging Novice Students with Latin Grammar. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems , CHI '16, pages 944-956, New York, NY, USA, 2016. ACM.
    [Abstract, BibTeX, Video, etc.]

    PETALS Project -- A Visual Decision Support Tool For Landmine Detection

    We have built an interactive visualization system, called PETALS, that helps novice deminers learn how to correctly identify dangerous cluster configurations of landmines. We conducted a controlled study with two experienced instructors from the Humanitarian Demining Training Center (HDTC) in Fort Leonard Wood, Missouri and 58 participants, who were put through the basic landmine detection course. Half of the participants had access to PETALS during training and half did not. During the final exam, which all participants completed without PETALS, participants who used PETALS during training were 72% less likely to make a mistake on the cluster tasks. These results are not yet published, but the available papers capture the initial development and evaluation of the PETALS system.

    Lahiru Jayatilaka, David M. Sengeh, Charles Herrmann, Luca Bertuccelli, Dimitrios Antos, Barbara J. Grosz, and Krzysztof Z. Gajos. PETALS: Improving Learning of Expert Skill in Humanitarian Demining. In Proc. COMPASS '18: ACM SIGCAS Conference on Computing and Sustainable Societies , 2018. To appear.
    [Abstract, BibTeX, etc.]

    Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. Evaluating a Pattern-Based Visual Support Approach for Humanitarian Landmine Clearance. In CHI '11: Proceeding of the annual SIGCHI conference on Human factors in computing systems , New York, NY, USA, 2011. ACM.
    [Abstract, BibTeX, Authorizer, etc.]

    Lahiru G. Jayatilaka, Luca F. Bertuccelli, James Staszewski, and Krzysztof Z. Gajos. PETALS: a visual interface for landmine detection. In Adjunct proceedings of the 23rd annual ACM symposium on User interface software and technology, UIST '10, pages 427-428, New York, NY, USA, 2010. ACM.
    [Abstract, BibTeX, Authorizer, etc.]


    Interactions

    The senses we call upon when interacting with technology are restricted. We mostly rely on vision and hearing, and increasingly touch, but taste and smell remain largely unused. Although our knowledge about sensory systems and devices has grown rapidly over the past few decades, there is still an unmet challenge in understanding people's multisensory experiences in HCI. The goal is that by understanding the ways in which our senses process information and how they relate to one another, it will be possible to create richer experiences for human-technology interactions.

    To meet this challenge, we need specific actions within the HCI community. First, we must determine which tactile, gustatory, and olfactory experiences we can design for, and how to meaningfully stimulate them in technology interactions. Second, we need to build on previous frameworks for multisensory design while also creating new ones. Third, we need to design interfaces that allow the stimulation of unexplored sensory inputs (e.g., digital smell), as well as interfaces that take into account the relationships between the senses (e.g., integration of taste and smell into flavor). Finally, it is vital to understand what limitations come into play when users need to monitor information from more than one sense simultaneously.

    Thinking Beyond Audiovisual Interfaces

    Though much development is needed, in recent years we have witnessed progress in multisensory experiences involving touch. It is key for HCI to leverage the full range of tactile sensations (vibrations, pressure, force, balance, heat, coolness/wetness, electric shocks, pain and itch, etc.), taking into account the active and passive modes of touch and its integration with the other senses. This will undoubtedly provide new tools for interactive experience design and will help to uncover the fine granularity of sensory stimulation and emotional responses.

    Moreover, both psychologists and neuroscientists have advanced the field of multisensory perception over recent decades. For example, they have provided crucial insights on the multisensory interactions that give rise to the psychological "flavor sense" [1]. The development of taste and smell interfaces, and subsequently flavor interfaces, is still in its infancy; much work will be required to create multisensory-based systems that are both meaningful to people and scalable. Nevertheless, technology is advancing rapidly, including some one-off designs such as LOLLio [2], MetaCookie+ [3], and Tongue Mounted Digital Taste Interface/Taste+ [4] (Figure 1).

    Taste+ is an example of how multisensory interaction could improve dining experiences (which, by definition, are multisensorial [1]). The user can augment the flavors of food and beverages by applying weak and controlled electrical pulses on their tongue using electronically enhanced everyday utensils such as spoons and beverage bottles. The initial experimental results show that users perceive virtual salty and sour sensations.

    Moving Toward the Chemical Senses

    Here we want to highlight that there are opportunities to enhance designers' and developers' abilities to create meaningful interactions and make use of the whole spectrum of sensory experiences. However, there are still many challenges when studying taste and particularly smell, especially related to inter-subject variability, varying olfactory preferences over time, and cross-sensory influences. No other sensory modality makes as direct and intense contact with the neural substrates of emotion and memory, which may explain why smell-evoked memories are often emotionally potent.

    Smell and taste are known as the chemical senses because they rely on chemical transduction. We do not yet know entirely how to digitize these senses in the HCI context compared with others like sound and light, where we can measure frequency ranges and convert them into a digital medium (bits).

    As a community, we need to explore and develop design methods and frameworks that provide both quantitative and qualitative parameters for sensory stimulation. In the case of touch, the process is well facilitated by the proliferation of haptic technologies (from contact to contactless devices), but we are still in the early stages of development for taste and smell. However, our understanding of these senses is now ahead of the technological development, thanks to the rich insights achieved by psychology and neuroscience. We thus have the opportunity to shape the development of future taste- and smell-based technologies (Figure 2) [3]. A basic understanding of how these chemical senses could be characterized from an HCI design perspective can be established.

    For instance, Obrist et al. [5] investigated the characteristics of the five basic taste experiences (sweet, salty, bitter, sour, and umami) and suggested a design framework. This framework describes the characteristics of taste experiences across all five tastes, along three themes: temporality, affective reactions, and embodiment. Particularities of each individual taste are highlighted in order to elucidate the potential design qualities of single tastes (Figure 3). For example, sweet sensations can be used to stimulate and enhance positive experiences, though on a limited timescale, as the sweetness quickly disappears, leaving one unsatisfied. It's a pleasant taste but one that is tinged with a bittersweet ending. In contrast to the sweet taste, the sour taste is described as short-lived, often coming as a surprise due to its explosive and punchy character. This taste overwhelms with its rapid appearance and rapid decay. It leaves one with the feeling that something is missing.

    How is This Information Useful for HCI?

    LOLLio, the taste-based game device, currently uses sweet and sour for positive and negative stimulation during game play. We suggest that our framework could improve such games by providing fine-grained insights on the specific characteristics of taste experiences that could be integrated into the game play. For example, when a person moves between related levels of a game, a continuing taste like bitter or salty is useful, based on the specific characteristics of those tastes, whereas when a user is moving to distinct levels or is performing a side challenge, an explosive taste like sour, sweet, or umami might be more suitable. The designer can adjust specific tastes in each category to create different affective reactions and a sense of agency.
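
    As a purely illustrative sketch (not something proposed in the article), the level-transition heuristic above could be expressed as a small lookup in game code; the transition names and functions below are hypothetical placeholders, not part of LOLLio or of the framework itself.

```python
# Purely illustrative sketch of the level-transition heuristic described above.
# The transition names and choose_taste() are hypothetical placeholders; they
# are not part of LOLLio or of the Obrist et al. framework.
from typing import Optional

# "Lingering" tastes for transitions between related levels; "explosive",
# fast-decaying tastes for jumps to distinct levels or side challenges.
TASTES_FOR_TRANSITION = {
    "related_level": ["bitter", "salty"],
    "distinct_level": ["sour", "sweet", "umami"],
    "side_challenge": ["sour", "sweet", "umami"],
}

def choose_taste(transition: str, preference: Optional[str] = None) -> str:
    """Pick a taste stimulus for a game transition, honoring a valid user preference."""
    options = TASTES_FOR_TRANSITION.get(transition, ["sweet"])
    return preference if preference in options else options[0]

# Example: a side challenge triggers a short, "explosive" taste.
print(choose_taste("side_challenge"))          # -> "sour"
print(choose_taste("related_level", "salty"))  # -> "salty"
```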

    There are already a number of suggestions from the context of multisensory product design. For example, Michael Haverkamp [6] has put forward a framework for synesthetic design. The idea here is to achieve "the optimal figuration of objects based upon the systematic connections between the senses." For that purpose, Haverkamp suggests that designers need to take into account different levels of interconnections between the senses, such as the relations between (abstract) sensory features in different modalities (e.g., visual shape and taste qualities) or semantic associations (e.g., as a function of a common identity or meaning) that can for instance be exploited in a multimedia context (Figure 4).

    Directions for Future Research

    Based on multisensory experience research, it is possible to think of a variety of directions for the future. For example, the research on taste experiences presented here can be discussed with respect to its relevance for design, building on existing psychological theories of information processing (e.g., rational and intuitive thinking). Dual process theory, for instance, accounts for two styles of processing in humans: the intuition-based System 1, with associative reasoning that is fast and automatic and carries strong emotional bonds, and reasoning based on System 2, which is slower and more volatile, being influenced by conscious judgments and attitudes. On this view, the rapidity of the sour taste experience does not leave enough time for System 1 to engage with it and triggers System 2 to reflect on what just happened. Such reactions, when carefully timed, can prime users to be more rational in their thinking during a productivity task (e.g., to awaken someone who may be stuck in a loop). Moreover, an appropriately presented taste can create a synchronic experience that can lead to stronger cognitive ease (to make intuitive decisions) or reduce cognitive ease to encourage rational thinking. Note, of course, that taste inputs will generally be utilized with other sensory inputs (e.g., visual), and thus the alignment or misalignment, or congruency, of the different inputs (in terms of processing style, emotions, identity, and so on) can result in different outcomes (positive or negative).

    Research of this kind could allow designers and developers to meaningfully harness touch, taste, and smell in HCI and open up new ways of talking about the sense of taste and related experiences. People often say things like "I like it. It is sweet," but the underlying properties of specific and often complex experiences in HCI remain silent and consequently inaccessible to designers. Therefore, having a framework that includes more fine-grain descriptions such as "it lingers" and "it is like being punched in the face," which have specific experiential correlates, can lead to the creation of a richer vocabulary for designers and can evoke interesting discussions around interaction design.

    Furthermore, it is crucial to determine the meaningful design space for multisensory interactive experiences. For example, we rarely experience the sense of taste in isolation. Perhaps, aiming for the psychological flavor sense would be a way to go, as we combine taste, olfactory, and trigeminal/oral-somatosensory inputs in our everyday life whenever we eat or drink. Here, it is key to think about congruency and its ability to produce different reactions in the user. At the same time, it is also key to understand the unique properties of each sensory modality before designing for their sensory integration in the design of interactive systems.

    Studying these underexploited senses not only enhances the design space of multisensory HCI but also helps to improve the fundamental understanding of these senses along with their cross-sensory associations.

    1. Spence, C. Multisensory flavor perception. Cell 161, 1 (2015), 24–35.

    2. Murer, M., Aslan, I., Tscheligi, M. LOLLio: Exploring taste as playful modality. Proc. of TEI 2013. 299–302.

    3. Narumi, T., Nishizaka, S., Kajinami, T., Tanikawa, T., and Hirose, M. Augmented reality flavors: Gustatory display based on edible marker and cross-modal interaction. Proc. of CHI 2011. 93–102.

    4. Ranasinghe, N., Karunanayaka, K., Cheok, A.D., Fernando, O.N.N., Nii, H., and Gopalakrishnakone, P. Digital taste and smell communication. Proc. of the 6th International Conference on Body Area Networks. ICST, 2011, 78–84.

    5. Obrist, M., Comber, R., Subramanian, S., Piqueras-Fiszman, B., Velasco, C., and Spence, C. Temporal, affective, and embodied characteristics of taste experiences: A framework for design. Proc. of CHI 2014. 2853–2862.

    6. Haverkamp, M. Synesthetic Design: Handbook for a Multi-sensory Approach. Birkhäuser Verlag, Basel, 2013.

    7. Burgess, M. We got sprayed in the face by a 9D television. Wired (May 20, 2016) http://www.wired.co.uk/article/9d-television-touch-smell-taste

    Marianna Obrist is a reader in interaction design at the University of Sussex, U.K., and head of the Sussex Computer Human Interaction (SCHI "Sky") Lab (http://www.multisensory.info/). Her research focuses on the systematic exploration of touch, taste, and smell experiences as future interaction modalities. [email protected]

    Carlos Velasco (http://carlosvelasco.co.uk/) is a member of the Crossmodal Research Laboratory, University of Oxford, U.K., and a postdoctoral research fellow at the Imagineering Institute, Iskandar, Malaysia. His research focuses on crossmodal perception and its applications. [email protected]

    Chi Thanh Vi is a postdoctoral research fellow at the SCHI Lab at the University of Sussex. He is interested in using different brain-sensing methods to understand the neural basis of user states, and the effect of taste on decision-making behavior. [email protected]

    Nimesha Ranasinghe (http://nimesha.info) is a research fellow at the National University of Singapore. His research interests include digital multisensory interactions (taste and smell), wearable computing, and HCI. During his Ph.D. studies he invented virtual taste technology. [email protected]

    Ali Israr is a senior research engineer at Disney Research, Pittsburgh, USA. He is exploring the role of haptics in multimodal and multisensory settings such as VR/AR, wearables, and handhelds, and in gaming. [email protected]

    Adrian David Cheok (http://adriancheok.info) is director of the Imagineering Institute, Iskandar, Malaysia, and a chair professor of pervasive computing at City University London. His research focuses on mixed reality, HCI, wearable computers and ubiquitous computing, fuzzy systems, embedded systems, and power electronics. [email protected]

    Charles Spence (http://www.psy.ox.ac.uk/team/charles-spence) is the head of the Crossmodal Research Laboratory, University of Oxford, U.K. His research focuses on how a better understanding of the human mind will lead to the better design of multisensory foods, products, interfaces, and environments. [email protected]

    Ponnampalam Gopalakrishnakone is professor emeritus in anatomy at the Yong Loo Lin School of Medicine, National University of Singapore, and chairman of the Venom and Toxin Research Programme at the National University of Singapore. [email protected]




    Below is a list of all projects available for summer 2021. Please review these prior to applying; in the application, you will be asked to select your top three projects of interest.

    Accelerating Innovation Through Analogy Mining

    Project Description: We are looking for students to build prototype interactive systems that accelerate the rate of scientific innovation through mining analogies. Students will work with a graduate student mentor and a faculty advisor at HCII to gain hands-on experience in building interactive visualizations and in developing and applying cutting-edge natural language processing models. As a member of this project, you will play a key role in defining, designing, developing, and evaluating interactive visualizations and algorithms involving the challenging dataset of scientific text. You will contribute to advancing techniques for finding deep structural (analogical) relations between scientific concepts that go beyond simple keyword matching based on surface similarity. You will also contribute to scaling these techniques to millions of scientific papers to support interactive visualizations. To evaluate the visualizations, we will conduct user studies measuring how such interfaces might support scientists’ creativity.
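
    As a hedged illustration of the contrast between surface keyword matching and the kind of semantic comparison this project pursues, the sketch below uses a placeholder embed function; a real system would substitute a learned encoder (e.g., a TensorFlow model), and the example texts are invented.

```python
# Sketch contrasting surface keyword overlap with similarity in an embedding space.
# embed() is a stand-in for a real sentence encoder (e.g., a TensorFlow model);
# the example texts are invented.
import numpy as np

def keyword_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets: the 'surface similarity' baseline."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a learned model would map analogous ideas nearby."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)

def embedding_similarity(a: str, b: str) -> float:
    """Cosine similarity of the two embeddings."""
    va, vb = embed(a), embed(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

doc_a = "a pump that moves fluid through tubing by sequential compression"
doc_b = "a soft gripper that grasps objects by sequentially compressing chambers"
print(keyword_overlap(doc_a, doc_b))       # low: few shared words
print(embedding_similarity(doc_a, doc_b))  # with a real encoder, could be high
```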

    • Interest and experience in building Web-based interactive systems.
    • Prior knowledge in Natural Language Processing technologies is preferred but not required.
    • We use D3.js for data visualization, and Tensorflow and Apache Beam for developing and deploying machine learning models.

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Artificial Intelligence (AI)
    • Data Visualization
    • Methods
    • Scientific Collaboration
    • User Experience (UX)

    Project Description: Autonomous vehicles have the potential to be designed to increase the mobility of those who have physical, sensory, or cognitive disabilities. We are working to develop new design ideas for how to make autonomous cars best serve those with disabilities. To develop new solutions, we are working with members of the disability community and transportation advocates to understand people's mobility needs, current challenges, and hopes for the future. We are looking for student researchers interested in conducting user research, design, and prototyping around making autonomous cars accessible.

    • Experience conducting user studies
    • Qualitative research (interviews, surveys, focus groups)
    • Participatory design
    • Some experience with electronics
    • Basic experience with programming
    • Some Fabrication experience

    Research Areas in Which This Project Falls:

    • Accessibility
    • Artificial Intelligence (AI)
    • Design Research
    • Internet of Things (IoT)
    • Sensors
    • Service Design
    • User Experience (UX)

    Project Description: When learning to use a new piece of software, developers typically spend a large amount of time reading, understanding, and trying to answer questions with online materials. To help developers keep track of important, confusing, and useful information with the intent of sharing their learning with future developers, we developed a social annotation tool, Adamite, a Google Chrome extension. For more information, you can read our recently submitted paper: https://horvathaa.github.io/public/publications/adamite-submission-deano. While Adamite is successful as an extension, we want you to help us refine Adamite’s current features, and design and build new features depending upon your own interests. Possible project focuses may include: extending Adamite to work in a code editor like Visual Studio Code or on a mobile device, adding new features to make annotations even easier and more beneficial to the author and subsequent users (such as using intelligent techniques to cluster related annotations), studying developers’ actual usage of annotations during a software learning task, or porting Adamite to entirely new domains beyond programming, like shopping. Working on this project may result in being an author on a publication at a prestigious conference such as CHI, UIST, or CSCW, and we are hoping to release Adamite as an open-source project for general use.

    • Required Skills: Some web development experience (e.g., one completed introductory web development course) OR some design and prototyping experience (e.g., experience using Figma or Adobe Creative suite tools) OR experience performing user studies (e.g., interviews, survey design, A/B lab studies)
    • Performing qualitative data analysis (e.g., grounded theory)
    • Preferred skills: Experience using React

    Research Areas in Which This Project Falls:

    Project Description: Artificial intelligence (AI) systems are increasingly used to assist humans in making high-stakes decisions, such as online information curation, resume screening, mortgage lending, police surveillance, public resource allocation, and pretrial detention. While the hope is that the use of algorithms will improve societal outcomes and economic efficiency, concerns have been raised that algorithmic systems might inherit human biases from historical data, perpetuate discrimination against already vulnerable populations, and generally fail to embody a given community's important values. Recent work on algorithmic fairness has characterized the manner in which unfairness can arise at different steps along the development pipeline, produced dozens of quantitative notions of fairness, and provided methods for enforcing these notions. However, there is a significant gap between the over-simplified algorithmic objectives and the complications of real-world decision-making contexts. This project aims to close the gap by explicitly accounting for the context-specific fairness principles of actual stakeholders, their acceptable fairness-utility trade-offs, and the cognitive strengths and limitations of human decision-makers throughout the development and deployment of the algorithmic system.

    To meet these goals, this project enables close human-algorithm collaborations that combine innovative machine learning methods with approaches from human-computer interaction (HCI) for eliciting feedback and preferences from human experts and stakeholders. There are three main research activities that naturally correspond to three stages of a human-in-the-loop AI system. First, the project will develop novel fairness elicitation mechanisms that will allow stakeholders to effectively express their perceptions on fairness. To go beyond the traditional approach of statistical group fairness, the investigators will formulate new fairness measures for individual fairness based on elicited feedback. Secondly, the project will develop algorithms and mechanisms to manage the trade-offs between the new fairness measures developed in the first step, and multiple existing fairness and accuracy measures. Finally, the project will develop algorithms to detect and mitigate human operators' biases, and methods that rely on human feedback to correct and de-bias existing models during the deployment of the AI system.
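
    To make the idea of a quantitative fairness measure concrete, here is a minimal sketch of one standard group-fairness check, the demographic parity difference; the decisions and group labels are invented, and this is not the project's actual method, which develops richer, stakeholder-elicited measures.

```python
# Minimal sketch of one common group-fairness metric: demographic parity
# difference, the gap in positive-decision rates between two groups.
# The data below are invented; the project described above develops richer,
# stakeholder-elicited measures rather than relying on this statistic alone.
import numpy as np

def demographic_parity_difference(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute gap in positive-outcome rate between group 'a' and group 'b'."""
    rate_a = decisions[groups == "a"].mean()
    rate_b = decisions[groups == "b"].mean()
    return abs(float(rate_a - rate_b))

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = favorable decision
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(decisions, groups))  # 0.75 vs 0.25 -> 0.5
```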

    • Preferred but not required: Working with the project team to plan and conduct research studies, including interviews, user studies, surveys, design workshops, or behavioral experiments.
    • Preferred but not required: Analyzing and interpreting data collected from research studies, in collaboration with other project team members.
    • Preferred but not required: Ideating and designing new tools to improve fairness in machine learning practice.

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Artificial Intelligence (AI)
    • Social Computing
    • Societal Problems

    Lead Mentor: Alexandra Ion

    Project Description: We are looking to push the boundaries of mechanical metamaterials by unifying material and device. Metamaterials are advanced materials that can be designed to exhibit unusual properties and complex behavior. Their function is defined by their cell structure, i.e., their geometry. Such materials can incorporate entire mechanisms, computation, or re-configurable properties within their compliant cell structure, and have applications in product design, shape-changing interfaces, prosthetics, aerospace and many more.

    In this project, we will develop design tools that allow novice users and makers to design their own complex materials and fabricate them using 3D printing or laser cutting. This may involve playfully exploring new cell designs, creating novel application examples by physical prototyping and developing open source software.

    • CS skills: software development, background in geometry, optimization, and/or simulation
    • 3D modeling basics (CAD tools, e.g., Autodesk Fusion 360 or similar)
    • Basic knowledge of classical mechanics or material science

    You don’t have to cover all skills, since this will likely be a group project. We are looking for diverse teams with complementary skills.

    Research Areas in Which This Project Falls:

    Project Description: People love products and services that use artificial intelligence (AI) to make the product work better. Today, more and more companies are searching for ways to do this. From spam filters that save people time and attention, to recommenders that make it easier to find something of interest, to conversational agents that offer a more natural way to interact with a computer, to fully functioning smart homes and driverless cars, AI can make things better.

    Unfortunately, UX designers, the people most often asked to come up with new ideas for products and services, really struggle when trying to innovate with AI. These professionals often fail to notice the many simple ways that AI can make people’s interaction better. In addition, when they do try to envision new things, they most often generate ideas for things that cannot be built. Our work focuses on helping UX designers to become better at envisioning what AI can do and then communicating their ideas to development teams.

    Our work addresses this challenge in three ways. First, we are making resources to help designers better understand AI’s capabilities and its dependency on labeled datasets. Second, we are making new design tools that scaffold designers in thinking through what interaction with a probabilistic system might be like, a system that can make inference errors. Third, we are working with professional designers working in industry to better understand their work practices and to identify the best time and place for them to use the resources and the tools.

    We are looking for research assistants to help us with our research. This work will involve:
    1. Designing user interfaces that make our AI resources available to designers. These include a taxonomy of AI capabilities and a collection of AI interaction design patterns.
    2. Designing user interfaces for new tools that help designers recognize when they should search for opportunities to use AI to enhance their designs.
    3. Conducting interviews and participating in workshops with professional designers who want to get better at working with AI.

    Students working on this project will learn about AI capabilities from a UX design perspective, and they will develop resources for designers to leverage AI opportunities in their work.

    • Design Research (e.g., user interviews, design workshops, affinity diagrams)
    • User interface design (sketching and prototyping of novel UIs)
    • Interest in Human-AI Interaction
    • Strong organizational skills, reliable, self-motivated
    • Education in data science, analytics, or data mining, and/or experience working with user telemetry data

    Research Areas in Which This Project Falls:

    • Artificial Intelligence (AI)
    • Design Research
    • Service Design
    • Tools
    • User Experience (UX)

    Project Description: Looking for a student interested in interaction design and/or AI to help with the Apprentice Learner Framework and SmartSheet. The goal of these two projects is to build an AI system that can be used for rapid Intelligent Tutoring System authoring. The Apprentice Learner learns to solve problems via a human's demonstrations and correctness feedback, and in turn produces a set of rules that can be used as a tutoring system for students. SmartSheet aims to use AL in conjunction with handwriting recognition to make handwriting-recognition-capable tutoring systems built via a tablet- and stylus-based interface. We are also interested in hosting students interested in expanding the Apprentice Learner Framework in general, including for the purposes of cognitive modeling.
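
    The following is a highly simplified sketch of the demonstrate-then-give-feedback loop described above, assuming an invented rule representation; it is not the Apprentice Learner Framework's actual API.

```python
# Toy sketch of tutor authoring by demonstration plus correctness feedback.
# The rule representation and functions are invented for illustration; they are
# not the Apprentice Learner Framework's actual API.

def rule_from_demonstration(state: dict, action: str) -> dict:
    """Induce a naive rule: 'in states like this, take this action'."""
    return {"condition": dict(state), "action": action, "score": 0}

def apply_feedback(rule: dict, correct: bool) -> None:
    """Correctness feedback strengthens or weakens a learned rule."""
    rule["score"] += 1 if correct else -1

# An author demonstrates one step of solving 2x + 3 = 7, then marks the
# system's replay of that step as correct.
rule = rule_from_demonstration({"equation": "2x + 3 = 7"}, "subtract 3 from both sides")
apply_feedback(rule, correct=True)
print(rule)  # the rule's score now reflects one piece of positive feedback
```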

    • Requirement: Solid programming skills (mostly Python; JavaScript would also be helpful)
    • Preferred: Some interest/experience with machine learning
    • Optional: UX design/implementation skills

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Artificial Intelligence (AI)
    • Intelligent Tutoring Systems
    • User Experience (UX)

    Lead Mentor: David Lindlbauer

    Project Description: Augmented Reality and Virtual Reality offer interesting platforms to re-define how users interact with the digital world. It is unclear, however, what the requirements in terms of usability and interaction are to avoid overloading users with unnecessary information. In this project, we will build on existing machine learning approaches such as saliency prediction that leverage insights into human visual perception. The goal is to create computational approaches to improve the applicability and usefulness of AR and VR systems.

    • Strong technical background
    • Some experience with 3D editors (e.g. Unity, Unreal) and 3D programming
    • Some familiarity with Computer Vision

    Research Areas in Which This Project Falls:

    • Applied Machine Learning
    • Augmented Reality (AR)
    • Context-Aware Computing
    • Virtual Reality (VR)

    Project Description: Vega-Lite is a high-level grammar of interactive graphics. It provides a concise, declarative JSON syntax to create an expressive range of visualizations for data analysis and presentation. It is used by thousands of data enthusiasts, ML scientists, and companies around the world. We have a number of projects around adding new features to the visualization toolkits that are going to be part of the open-source tool. Please take a look at https://docs.google.com/document/d/1fscSxSJtfkd1m027r1ONCc7O8RdZp1oGABwca2pgV_E and the issue trackers for some specific project ideas we could work on.
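
    For readers unfamiliar with the grammar, a minimal Vega-Lite specification looks like the following; it is written here as a Python dict that serializes to the JSON Vega-Lite consumes, and the tiny dataset is invented.

```python
# A minimal Vega-Lite bar-chart specification, expressed as a Python dict and
# serialized to the JSON syntax that Vega-Lite consumes. The data are invented.
import json

spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"category": "A", "count": 28},
        {"category": "B", "count": 55},
        {"category": "C", "count": 43},
    ]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}
print(json.dumps(spec, indent=2))  # paste into the online Vega editor to render
```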

    • Experience with web-development (JavaScript) and Git
    • Experience with data visualization, TypeScript, D3, and Vega are a plus

    Research Areas in Which This Project Falls:

    Project Description: In this project you will help in the design of visual and experimental aspects of a decimal number learning game, Decimal Point (http://www.cs.cmu.edu/~bmclaren/projects/DecimalPoint/), to prepare it for new classroom studies. You will work with a professor and researchers who do learning science studies in middle school classrooms. You should have design skills, both to help with new artwork in the game and to help prepare for the classroom studies. An ideal candidate will be someone with a psychology and/or design/art background.

    Research Areas in Which This Project Falls:

    • Education
    • Games
    • Learning Sciences and Technologies
    • Social Good
    • User Experience (UX)

    Project Description: In this project you will write and revise code to alter an existing decimal number learning game, Decimal Point (http://www.cs.cmu.edu/~bmclaren/projects/DecimalPoint/), to prepare it for new classroom studies. You will work with a professor and researchers who do learning science studies in middle school classrooms. You’ll learn about those studies, as well as practice important professional technical skills, such as using source code repositories and engaging in good software engineering practice. Preferred, but not necessary, is that you will have skills in HTML5/CSS3/JavaScript. Familiarity with Angular or AngularJS is also desirable.

    • Programming skills (e.g., Python, Java)
    • Knowledge of HTML5/CSS3/JavaScript
    • Knowledge of Angular or AngularJS
    • A desire to work with a fun research team!

    Research Areas in Which This Project Falls:

    • Education
    • Games
    • Intelligent Tutoring Systems
    • Learning Sciences and Technologies
    • Social Good
    • User Experience (UX)

    Project Description: Students learn best when offered timely, proper scaffolding that corresponds to their current knowledge level. With AI-based intelligent tutoring systems that can calculate students’ current skill mastery of certain knowledge, it is possible to offer personalized, adaptive practice based on each student’s prior knowledge. This may better prepare students for the next-to-be-learned knowledge, reduce the time they spend struggling unproductively or practicing knowledge they have already mastered, and increase their learning efficiency through more personalized, focused, and targeted practice. Such adaptive practice of prior knowledge also holds the potential to bridge the knowledge gap between different students and promote education equity. In this project, we are interested in exploring the design space of building such a system/algorithm.
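
    One widely used way to estimate a student's current skill mastery from practice data is Bayesian Knowledge Tracing (BKT); the sketch below shows the standard update with illustrative parameter values, not this project's actual model.

```python
# Sketch of Bayesian Knowledge Tracing (BKT), a standard way to estimate skill
# mastery from a sequence of correct/incorrect answers. The parameter values
# are illustrative defaults, not the project's actual model.

def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
    """Return the updated probability that the student has mastered the skill."""
    if correct:
        posterior = p_mastery * (1 - p_slip) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        posterior = p_mastery * p_slip / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    # After each practice opportunity the student may transition to mastery.
    return posterior + (1 - posterior) * p_transit

p = 0.3  # prior probability of mastery
for answer_correct in [True, True, False, True]:
    p = bkt_update(p, answer_correct)
print(round(p, 3))  # adaptive practice could stop once p exceeds, say, 0.95
```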

    • Preferred skills in web-based technologies development
    • Preferred skills in HCI/design, especially related to web applications
    • Preferred skills in AI/machine learning/data mining

    Research Areas in Which This Project Falls:

    • Artificial Intelligence (AI)
    • Design Research
    • Education
    • Intelligent Tutoring Systems
    • Learning Sciences and Technologies
    • Social Good
    • User Experience (UX)

    Project Description: In research studies, we found that Lynnette, an AI-based tutoring system for middle-school equation solving, is very effective in helping students learn. To take Lynnette to the next level, we are now working to make it more engaging by adding gamification elements such as a space theme, a badge system, achievements, and a narrative context. We also added a drag-and-drop interaction format for equation solving, for variety and smooth interactions. We would like to make Lynnette even more engaging. We have a variety of ideas, including personalized dashboards, culturally-adaptive story problems, and adapting instruction to social and meta-cognitive factors (e.g., sense of belonging in the classroom, self-efficacy). We are open to other ideas as well. The work involves design brainstorming, co-design with middle-school students, prototyping, and trying out prototypes with students.

    • Preferred skills: Design/HCI, prototyping, web development
    • Experience with game design, educational technology, and designing for teenagers is not required but would be considered a plus

    Research Areas in Which This Project Falls:

    • Design Research
    • Education
    • Games
    • Intelligent Tutoring Systems
    • Learning Sciences and Technologies
    • User Experience (UX)

    Project Description: Leveraging stimuli-responsive actuators and smart materials to develop wearables and second-skin applications that sense and actuate. We look to investigate novel fabrication processes, build novel manufacturing tools, and explore new structures and mechanisms for physical prototypes. Students are encouraged to look for a balance of impactful application and fundamental research.

    • We are looking for candidates who have hands-on design and making experiences.
    • If you have digital fabrication experiences, or hands-on craft/making projects, please send an email and share your portfolio to Prof. Lining Yao: liningy [at] andrew.cmu.edu

    Research Areas in Which This Project Falls:

    Project Description: Over a trillion hours per year are spent searching for and making sense of complex information. For a lot of this "sensemaking," search is just the beginning: it’s also about building up your landscape of options and criteria and keeping track of all you’ve considered and learned as you go. We've found through more than a decade of research that existing tools don't support this messy process. So we are building a tool that does.

    • Looking for undergraduates with background and interest in one or more of:
      • Design, both visual and interaction
      • UX research
      • Customer discovery and lean methods
      • Front end programming (especially with React/Firebase experience)

      Research Areas in Which This Project Falls:

      Project Description: Our research experience in K-12 classrooms equipped with AI-based tutoring systems has shown that, even though the software is coaching each student individually with adaptive guidance, teachers still play an enormous role in student learning. For example, we have seen teachers in these classrooms team up students on the fly (e.g., a student who has learned a lot already with one who is struggling), for brief one-on-one extra aid. For teachers to be most effective, however, they must be able, in real time, to see how their students are doing (struggling, disengaged, very far into covering the learning objectives, etc.). This information will enable them to take action to aid those who need help. We are creating tools by which middle-school teachers can effectively orchestrate activity in these classrooms without taking their attention away from the students. The tools use wearable and mobile devices with artificial intelligence to turn the voluminous data generated by the software into actionable diagnostic information and recommendations for teachers. In a new project, we plan to provide tutoring software that supports both individual and collaborative problem-solving by students, to enhance the effectiveness of the spontaneous teaming up of students we have observed. We seek student interns interested in any of the many efforts needed to make these technologies work, such as: testing and refining collaborative ITSs with students; doing data mining to create new learning analytics that inform teachers about how their students are doing; and design-based research for designing and prototyping intuitive, comprehensible visualizations for teachers, for different hardware options, and trying them out with teachers.

      • HCI/design skills (preferred)
      • AI/machine learning/data mining (preferred)
      • Web technologies (preferred)

      Research Areas in Which This Project Falls:

      • Applied Machine Learning
      • Artificial Intelligence (AI)
      • Data Visualization
      • Design Research
      • Intelligent Tutoring Systems
      • Learning Sciences and Technologies
      • User Experience (UX)

      Project Description: In this project we are developing an interactive prototype for motivational interviewing training using a human-centered design approach. Motivational Interviewing (MI) is an effective therapeutic technique to support behavior change, but training is often time consuming and its effectiveness diminishes over time.

      Students on this project will work in a team to design and develop a prototype for time-effective interactive MI training, useful both for initial training and for refreshers. Our prototype will build on our research identifying the unique challenges nurses face in learning MI, which shows that MI newcomers struggle with building rapport, analyzing the problem, and promoting readiness for change during therapeutic interactions.

      • Interest in interactive prototype development, interaction design, psychology and/or mental health
      • Web programming (front-end or back-end)
      • Familiarity with inVision, Sketch, or other graphic design tools
      • Familiarity with Python or Javascript

      Research Areas in Which This Project Falls:

      • Design Research
      • Education
      • Games
      • Healthcare
      • Social Computing

      Project Description: Macroinvertebrates.org: The Atlas of Common Freshwater Macroinvertebrates of the Eastern United States has been successfully launched as a definitive teaching and learning collection and online guide for freshwater macroinvertebrate identification, with annotated key diagnostic characters marked down to family and genus for the 150 taxa most commonly used in citizen-science water quality assessment and education. This year we are completing the design, development, usability testing, evaluation, and release of a fully downloadable mobile app for both Android and iOS that includes an Aquatic Insect Field Guide, an interactive identification key (Orders) mode, and a self-practice quizzing and games feature.

      • Design and evaluation skills (interviewing, usability testing, qualitative analysis skills)
      • Copy writing and tutorial video production
      • Technical skills: Android and iOS development in React Native.

      Research Areas in Which This Project Falls:

      • Design Research
      • Education
      • Learning Sciences and Technologies
      • Scientific Collaboration

      Project Description: NoRILLA is a project based on research in Human Computer Interaction at Carnegie Mellon University. We are developing a new mixed-reality educational system bridging physical and virtual worlds to improve children's STEM learning and enjoyment in a collaborative way. It uses depth camera sensing and computer vision to detect physical objects and provide personalized immediate feedback to children as they experiment and make discoveries in their physical environment. NoRILLA has been used in many school districts, the Children's Museum of Pittsburgh, the Carnegie Science Center, and informal play spaces like IKEA and Bright Horizons. Research with hundreds of children has shown that it improves children's learning five-fold compared to equivalent tablet or computer games. We have recently received an NSF (Advancing Informal STEM Learning) grant in collaboration with science and children's museums around the country to expand our Intelligent Science Stations/Exhibits and develop the AI technology further. Responsibilities will include taking the project further by developing new modules/games, computer vision algorithms, and AI enhancements on the platform, deploying upcoming installations, and participating in research activities around it.
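
      To give a concrete feel for the kind of depth-based sensing described above, here is a minimal, hypothetical sketch (not NoRILLA's actual implementation): it segments objects that rise above a known table plane in a depth frame and reports how many were found, which a tutoring layer could then turn into feedback. It assumes OpenCV 4.x, NumPy, and a depth image in millimetres; all names and thresholds are invented for illustration.

```python
# Hypothetical sketch of depth-based object detection over a table plane.
# Assumes OpenCV 4.x and a uint16 depth frame in millimetres.
import cv2
import numpy as np


def detect_objects(depth_mm: np.ndarray, table_depth_mm: float,
                   min_height_mm: float = 20.0, min_area_px: int = 500):
    """Return bounding boxes of regions closer to the camera than the table plane."""
    # Pixels at least min_height_mm above (i.e., closer than) the table surface;
    # zero-depth pixels are treated as invalid and ignored.
    mask = ((depth_mm > 0) &
            (depth_mm < table_depth_mm - min_height_mm)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area_px]


if __name__ == "__main__":
    # Fake depth frame: a flat table at 1000 mm with one 100x100 px block 50 mm tall.
    frame = np.full((480, 640), 1000, dtype=np.uint16)
    frame[200:300, 300:400] = 950
    boxes = detect_objects(frame, table_depth_mm=1000)
    print(f"Detected {len(boxes)} object(s): {boxes}")  # feedback logic would go here
```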

      • The project has both software and hardware components.
      • Familiarity with computer vision, Processing/Java, game/interface development and/or robotics is a plus.

      Research Areas in Which This Project Falls:

      • Artificial Intelligence (AI)
      • Augmented Reality (AR)
      • Education
      • Games
      • Learning Sciences and Technologies
      • Societal Good

      Project Description: Past research shows that AI-based tutoring systems--software that coaches students step-by-step as they try to solve complex problems--can be very helpful aids to learning, for example in middle school and high school mathematics learning. The effectiveness of tutoring software has been shown in many different subject areas, and some have reached commercial markets and are in daily use in many schools. Inspired by these results, we have developed authoring tools that make the creation of tutoring software much easier--in many cases, it can be done entirely without programming, opening the door to a wide range of authors. Instructors in fields such as math and physics have used our tools to create tutoring software for their classes. We are now creating a new version of our authoring tools with the knowledge we have gained over the 20 years since their first release. We seek to make them intuitive, clear, efficient and readily available--without requiring complicated installation or advance instruction. We want to improve both the creation of the student-facing user interface and the specification of the tutoring knowledge and behavior behind it. We want to support teams of teachers collaboratively creating these self-coaching, self-grading online activities that they might assign instead of conventional homework.

      • The work involves user-centered design of authoring tools, web programming, building tool prototypes, and trying them out with tutor authors of various backgrounds.

      Research Areas in Which This Project Falls:

      • Artificial Intelligence (AI)
      • Education
      • End-User Programming
      • Intelligent Tutoring Systems
      • Learning Sciences and Technologies
      • User Experience (UX)

      Project Description: A key trend emerging from the popularity of smart, mobile devices is the quantified-self movement. The movement has manifested in the prevalence of two kinds of personal wellness devices: (1) fitness devices (e.g., FitBit) and (2) portable, connected medical devices (e.g., Bluetooth-enabled blood pressure cuffs). Fitness devices are seamless and very portable but offer low-fidelity information to the user; they do not generate any medically relevant data. We are currently working on building personal medical devices that are as seamless to use as a FitBit but generate medically relevant data. We are looking for students to contribute to various aspects of this project. Depending on their interests, students can help build and prototype the mobile app or hardware device, or contribute to the signal processing and machine learning component.

      Some subset of these skills would be useful:

      • Programming experience in Java and/or Python
      • Mobile programming
      • Machine learning
      • Hardware prototyping

      Research Areas in Which This Project Falls:

      • Applied Machine Learning
      • Healthcare
      • Internet of Things (IoT)
      • Sensors
      • Societal Problems
      • Wearables

      Project Description: While privacy is an important element for smart home products, it takes a great deal of effort to design and develop features to support end-user privacy. These individually built features result in distributed and non-consistent interfaces, further imposing challenges for user privacy management. Peekaboo is a new IoT app development framework that aims to make it easier for developers to build privacy-sensitive smart home apps through the intermediary of a smart home hub, while simultaneously offering architectural support for building centralized and consistent privacy features across all the apps. At the heart of Peekaboo is an architecture that allows developers to factor out common data preprocessing functions from the cloud service side onto a user-controlled hub and supports these functions through a fixed set of reusable, chainable, open-source operators. These operators then become the common structure of all the Peekaboo apps. Privacy features built on these operators become native features in the Peekaboo ecosystem.
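
      As a rough illustration of the chainable-operator idea described above, here is a minimal, hypothetical sketch; it is not the actual Peekaboo API, and all names (Record, select, redact, pipeline) are invented. It shows how a fixed set of reusable operators running on a hub could preprocess smart-home data, so that only the minimum necessary information is forwarded to an app's cloud side.

```python
# Hypothetical sketch of hub-side, chainable data-preprocessing operators.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Record:
    """A single piece of smart-home data flowing through the hub."""
    device: str
    payload: dict


# An operator maps a list of records to a list of records, so operators compose.
Operator = Callable[[List[Record]], List[Record]]


def select(device: str) -> Operator:
    """Keep only records from one device."""
    return lambda records: [r for r in records if r.device == device]


def redact(field: str) -> Operator:
    """Drop a sensitive field from each record's payload before it leaves the hub."""
    def _run(records: List[Record]) -> List[Record]:
        return [Record(r.device, {k: v for k, v in r.payload.items() if k != field})
                for r in records]
    return _run


def pipeline(*ops: Operator) -> Operator:
    """Chain operators left to right."""
    def _run(records: List[Record]) -> List[Record]:
        for op in ops:
            records = op(records)
        return records
    return _run


if __name__ == "__main__":
    raw = [Record("camera", {"image": "<frame bytes>", "person_count": 2}),
           Record("thermostat", {"temp_c": 21.5})]
    # Only the person count, not the raw image, is forwarded to the app's cloud side.
    app_pipeline = pipeline(select("camera"), redact("image"))
    print(app_pipeline(raw))
```

      Because every app is expressed in terms of the same small operator vocabulary, centralized privacy features (auditing, rate-limiting, or blocking a given field) can be implemented once at the hub rather than separately in each app.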

      Research Areas in Which This Project Falls:

      • End-User Programming
      • Internet of Things (IoT)
      • Security and Privacy
      • Sensors

      Project Description: Personalized Learning² (PL²) is an initiative addressing the opportunity gap for marginalized students through personal mentoring and tutoring with artificial intelligence-powered learning software. (personalizedlearning2.org)

      • The student researcher will contribute to designs that enhance the general usability of the PL² web application
      • The student researcher will need to draw on their UI/UX and Communication Design skills and prior experience to create mock-ups to be presented for team approval
      • Ideally the candidate will also be able to conduct user interviews to determine their needs and contribute to the creation of demo videos
      • Front-end programming experience (HTML, CSS, Javascript) is a nice-to-have but not required

      Research Areas in Which This Project Falls:

      Project Description: Multiple positions are open for summer undergraduate research assistants on an NSF-funded project, Smart Spaces for Making: Networked Physical Tools to Support Process Documentation and Learning. Candidates are sought for two roles: (1) Design Research (skills: conducting interviews, qualitative research analysis) and (2) Technical Prototype Development (skills: hardware prototyping, server-side programming). Research assistants will work with an interdisciplinary team of design researchers, technology developers and learning scientists from CMU’s HCII, School of Architecture and the University of Pittsburgh’s Learning Research and Development Center (LRDC) to develop smart documentation tools to support learning practices in creative, maker-based studio environments. Students will participate in design research and development activities working with project site partners including Quaker Valley High School, CMU’s IDeATe program, and AlphaLab Gear’s Startable Youth program. For more details on the project: https://smartmakingtools.weebly.com

      • Design Research (skills: contextual inquiry, interviewing, design probes and qualitative research analysis)
      • Technical Prototype Development (skills: hardware prototyping, server-side programming)

      Research Areas in Which This Project Falls:

      • Education
      • Internet of Things (IoT)
      • Learning Sciences and Technologies
      • Methods

      Project Description: This project leverages theory from psychology and known social influence principles to improve cybersecurity behavior and enhance security tool adoption. Students on this project will work on designing and developing web-based security-related mini games or interventions that incorporate cybersecurity training and information into people’s everyday workflows. They may also have an opportunity to conduct evaluations of these mini games or interventions.

      • Interest in interaction design, psychology and/or cybersecurity
      • Web programming (front-end or back-end)
      • Familiarity with inVision, Sketch, or other graphic design tools
      • Familiarity with Python or Javascript

      Research Areas in Which This Project Falls:

      • Design Research
      • Games
      • Security and Privacy
      • Social Computing
      • Societal Problems

      Project Description: This project focuses on creating a culturally-responsive programming curriculum for underrepresented middle schoolers of color in a coding camp. REU students will participate in creating, refining, and assessing a programming curriculum. We look forward to working with students who would like to pursue this area of work. It is favorable if REU students have an excitement to promote diversity in and accessibility to computing; a strong interest in and/or prior experience with data analysis; motivation and time-management skills; and some experience in programming.

      Research Areas in Which This Project Falls:

      • Education
      • Human-Robot Interaction (HRI)
      • Learning Sciences and Technologies
      • Social Good

      Project Description: In this project, we explore how social robots can be used in out-of-school learning environments to empower Black, Latinx, and Native American middle school girls in computer science. We are using a culturally-responsive computing paradigm to examine how to reflect the learner's identity in the robot, towards the goal of improving learning outcomes and self-efficacy in computer science. We are looking for motivated REU students interested in exploring methods of endowing the robot with social characteristics and building rapport between the robot and the learner. REU students will work on data collection and/or analysis, and interaction design. Qualified students should have experience in programming/computer science as well as design/HCI or human-robot interaction.

      Research Areas in Which This Project Falls:

      • Design Research
      • Education
      • Human-Robot Interaction (HRI)
      • Learning Sciences and Technologies
      • Social Good

      Project Description: Research on the human factors of cybersecurity often treats people as isolated individuals rather than as social actors within a web of relationships and social influences. We are developing a better understanding of social influences on security and privacy behaviors across contexts. Students on this project will conduct interviews and surveys on people's cybersecurity behaviors in different types of relationships. They may also have the opportunity to develop and prototype new system design ideas based on research findings.

      • Coursework in psychology, design or cybersecurity
      • Interest in developing interviewing and survey design skills
      • Experience with statistical analysis techniques or packages such as R
      • Interest in interaction design

      Research Areas in Which This Project Falls:

      • Design Research
      • Security and Privacy
      • Social Computing
      • User Experience (UX)

      Project Description: This project (SEME) aims to design a feasible and sustainable chatbot that mentors teachers, supporting their implementation of teacher training methods in rural Ivory Coast. We will be building the chatbot on Facebook and deploying it at scale in Fall 2021. (Details: https://seme-cmu.github.io/seme-web/). We are looking for REU students interested in working on the design, data analysis, and development aspects of the project.

      Research Areas in Which This Project Falls:

      • Developing World
      • Learning Sciences and Technologies
      • Social Good
      • Societal Problems

      Project Description: Today, algorithmic decision-support systems (ADS) guide human decisions across a growing range of high-stakes settings – from predictive risk models used to inform child welfare screening decisions, to AI-based classroom tools used to guide instructional decisions, to data-driven decision aids used to guide mental health treatment decisions. While ADS hold great potential to foster more effective and equitable decision-making, in practice these systems often fail to improve, and may even degrade decision quality. Human decision-makers are often either too skeptical of useful algorithmic recommendations (resulting in under-use of decision support) or too reliant upon erroneous or harmfully biased recommendations (adhering to algorithmic recommendations, while discounting other relevant knowledge). To date, scientific and design knowledge remains scarce regarding how to foster appropriate levels of trust and productive forms of human discretion around algorithmic recommendations.

      In this project, we will investigate how to support more responsible and effective use of ADS in real-world contexts where these systems are already impacting the lives of children and families (e.g., mental healthcare, child welfare, and K-12 education). We will create new interfaces and training materials to help both practitioners and affected populations in these contexts decide: (1) when and how much to adhere to algorithmic recommendations, and (2) how to act upon or communicate (dis)trust. In addition, we will conduct experiments to evaluate the impacts different interface and training designs have on human–algorithm decision-making.

      • Experience conducting research with human subjects (e.g., user studies, design workshops, field studies, experiments) is preferred, but not required.
      • Experience with front-end design and/or development is preferred, but not required.
      • Interests and/or background in any of the following areas are a plus: design, cognitive science, anthropology, decision science, statistics, artificial intelligence, machine learning, psychology, learning sciences.

      Research Areas in Which This Project Falls:

      • Applied Machine Learning
      • Artificial Intelligence (AI)
      • Data Visualization
      • Design Research
      • Education
      • Ethics
      • Healthcare
      • Learning Sciences and Technologies
      • Social Good
      • Societal Problems

      Project Description: This project is seeking an undergraduate researcher with skills in HCI, design research, and/or social computing to conduct research on the impact of automation in the hospitality industry. This role is primarily dedicated to study design, data collection, and project management related to research on the Future of Work. This collaboration creates a unique opportunity for innovation focused on identifying challenges hospitality workers experience due to emerging technology, defining the pipeline of innovations in hospitality, evaluating workforce issues including training challenges and deficiencies, and examining policy questions and options.


      Introduction

      During the last century, research has been increasingly drawn toward understanding the human–nature relationship (1, 2) and has revealed the many ways humans are linked with the natural environment (3). Some examples of these include humans’ preference for scenes dominated by natural elements (4), the sustainability of natural resources (5, 6), and the health benefits associated with engaging with nature (7–9).

      Of these examples, interest in the impacts of the human–nature relationship on people’s health has grown as evidence for a connection accumulates in the research literature (10). This connection has underpinned a host of theoretical and empirical research in fields that, until now, have largely remained separate.

      Since the late nineteenth century, a number of descriptive models have attempted to encapsulate the dimensions of human and ecosystem health as well as their interrelationships. These include the Environment of Health (11), the Mandala of Health (12), the Wheel of Fundamental Human Needs (13), Healthy Communities (14), One Health (15), and bioecological systems theory (16). None, however, has fully incorporated all relevant dimensions or balanced the biological, social, and spatial perspectives (17, 18). In part this is due to the challenges of an already complex research base with respect to its concepts, evidence base, measurement, and strategic frameworks. Further attention to the complexities of these aspects, interlinkages, processes, and relations is required for a deeper understanding and for causal directions to be identified (19).

      This article reviews the interconnections between the human–nature relationship and human health. It begins by reviewing each of their concepts and methodological approaches. These concepts are then brought together to identify areas of overlap, as well as existing research on the potential health impacts related to humanity’s degree of relationship to nature and lifestyle choices. From this, a developing conceptual model is proposed that is inclusive of a human-centered perspective on health, viewing animals and the wider environment within the context of their relationship to humans. The model combines theoretical concepts and methodological approaches from the research fields examined in this review to facilitate a deeper understanding of the intricacies involved in improving human health.


      How to constrain theory to match the interactionally relevant facts

      To test theories in an ecologically valid way, it is important to distinguish between the facts available to participants in the context of an interaction and those that may become available to researchers in the context of analysis. A related distinction is often made in the philosophy of science between ‘contexts of discovery’ and ‘contexts of justification’ (Schickore & Steinle, 2006), although there are many field-specific interpretations and applications of this distinction (Hoyningen-Huene, 2006). In the context of a broader project to improve research reproducibility, Nosek et al. (2018) suggest this distinction is equivalent to the differences between hypothesis-generation and hypothesis-testing, inductive and deductive methods, or exploratory and confirmatory research. However, here we argue that a particular interpretation of this distinction should be used in the field of human interaction research, and suggest that this interpretation is especially useful for constraining the process of theorizing in ways that can improve ecological validity.

      Distinguish contexts of discovery from contexts of justification

      The ‘context of discovery’ is the situation in which a phenomenon of interest is first encountered. For example, when studying human interaction, a useful context of discovery would be an everyday conversation that happened to be recorded for analysis (Potter, 2002). ‘Contexts of justification’, in this example, might include the lab meeting, the conference discussion, and the academic literature within which the empirical details are reported, analyzed, and formulated as a scientific discovery (Bjelic & Lynch, 1992). Table 1 lists some resources for making sense of an interaction that either participants or analysts can use when discovering and justifying interactional phenomena. The third column shows some interactional resources that are available from both perspectives. For example, both participants and overlooking analysts can use observable features of the setting and the visible actions of the people within it to discover new phenomena. Both participants and analysts can also detect when these actions are produced smoothly, contiguously, and without interruption (Sacks, 1987). Both can see if certain actions are routinely matched into patterns of paired or ‘adjacent’ initiations and responses (Heritage, 1984, p. 256). Similarly, both analysts and participants can observe when flows of initiation and response seem to break down, falter, or require ‘repair’ to re-establish orderliness and ongoing interaction (Schegloff, Jefferson, & Sacks, 1977). By contrast, many other resources and methods for making sense of the situation are exclusively available from one perspective or the other. For example, analysts can repeatedly listen to a recording, slow it down, speed it up, and can precisely measure, quantify, and deduce cumulative facts that would be inaccessible to participants in the interaction. Similarly, participants may draw on tacit knowledge and use introspection—options which are not necessarily available for overlooking analysts—to make sense of the current state and likely outcomes of the interaction. The risk of ignoring these distinctions is that theories about how people make sense of social interaction can easily become uncoupled from empirical evidence about what the participants themselves treat as meaningful through their behavior in the situation (Garfinkel, 1964; Lynch, 2012).

      Table 1. Participants’, analysts’, and shared resources in contexts of discovery and justification.

      Context       | Participants’ resources                            | Analysts’ resources                                     | Shared resources
      Discovery     | Knowledge & experience beyond current interaction | Ability to fast forward, rewind, & replay interactions | Observable social actions & settings
      Justification | Introspection, inductive reasoning                 | Quantification & deductive analysis                    | Sequential organization of talk & social action

      Consider participants’ situational imperatives

      In order to ecologically ground theories in the context of interaction, we should constrain our theorizing to take account of what can be tested using the different kinds of evidence and methods available to both analysts and participants. Analysts should try to harness as many resources from the participants’ ‘context of discovery’ as possible, but it is also important that they take into account how the participants’ involvement in the situation is motivated by entirely different concerns. The drinker and the bartender do not usually go to a bar to provide causal explanations for interactional phenomena discovered in that setting for the benefit of scientific research. Their actions are mobilized by the mutual accountability of one person ordering a drink and the other person pouring it. As Garfinkel (1967, p. 38) demonstrates, failure to fulfill mutually accountable social roles can threaten to ‘breach’ the mutual intelligibility of the situation itself. Bartenders who fail to recognize the behavior of thirsty customers risk appearing inattentive or unprofessional. In an extreme case, failing to behave as a bartender may lead to getting fired and actually ceasing to be one. Similarly, customers who fail to exhibit behaviors recognizable as ordering a drink risk remaining unserved or, in an extreme case, being kicked out of the bar. If neither participant upholds their interactional roles, the entire situation risks becoming unrecognizable as the jointly upheld ordinary activity of ‘being in a bar’ (Sacks, 1984b). Interactional situations have this reflexive structure: they depend on participants behaving in certain ways in order to make the situation recognizable as the kind of situation where that kind of behavior is warranted. This makes it especially important to ground theories about interaction with reference to the resources and methods that are accessible to participants in the situation, and to take account of participants’ situational imperatives.

      Focus on reciprocal interactional behaviors

      Theories about interaction, then, should focus on whatever people in a given interactional situation discover and treat as relevant through their actions. For participants in an interaction, what counts as a ‘discovery’ is any action that they, in conversation analytic terms, observably orient towards and treat as relevant in the situation. Justification in the participants’ terms, then, consists of doing the necessary interactional ‘work’ to demonstrate their understanding and make themselves understood to others (Sacks, 1995, p. 252). When people interact they display their understandings and uphold the intelligibility and rationale of their actions (Hindmarsh, Reynolds, & Dunne, 2011). This reflexive process upholds the intelligibility of the social situation they’re currently involved in: an imperative that Garfinkel (1967) describes as ‘mutual accountability’. In our bar example, prospective drinkers and bartenders monitor one another’s behavior and discover, respectively, who is going to serve a drink, and who needs one. The resources they may rely on in order to make these discoveries include their bodily positions, head and gaze orientation, speech, and gesture. Each participant may also rely on cultural knowledge and prior experience of this kind of situation. However, these tacit resources are not directly accessible—neither to the other participants, nor to the overlooking analysts. Similarly, analysts could code and quantify any visible bodily movements and then use statistical methods for ‘exploratory data analysis’ (Jebb, Parrigon, & Woo, 2016) to develop a theory. This could be very misleading and ecologically invalid, though, since this form of analysis is not something participants could use as a resource to make sense of the situation, and it doesn’t necessarily take account of their displays of mutual observation and accountability. Theories about behavior in bars, therefore, should start by trying to explain this situation using only resources that are mutually accessible to participants and analysts. These resources could include any reciprocal interactional behaviors such as how drink-offerings and drink-requests are linked in closely timed sequences of social interaction.