NASA Spacesuit AR System

The NASA S.U.I.T.S. Challenge is an annual competition focused on designing and developing an augmented reality HUD (Head-Up Display) interface for astronauts participating in future NASA Artemis missions.

National Award Finalist for the NASA S.U.I.T.S. Challenge

Project Type
Augmented Reality

Timeline
Sep 2023 - May 2024

My Role
Developer

Team
RISD Astro

Overview

The NASA S.U.I.T.S. (Spacesuit User Interface Technologies for Students) Challenge invites student teams to design and develop AR HUD interfaces for astronauts on future Artemis missions. The RISD Astro team was selected as one of the top 10 national finalists for its innovative approach to augmented reality integration.

Background

As a team selected for NASA's SUITS challenge, we designed the user interface (UI) and underlying software for spacesuits and, new for this year, a mission control console for use on the surface of Mars. The spacesuit UI is deployed to a commercially available head-mounted passthrough AR device (HMD) such as the Microsoft HoloLens 2 or Magic Leap 2. The new Local Mission Control Console (LMCC) runs on a standard dual-monitor setup and lets teams interpret what will be needed in a Martian outpost. We later tested our software at NASA's Johnson Space Center (JSC) Rock Yard, where we demonstrated our solutions to NASA.

-> During on-site testing, NASA astronaut Anil Menon tested our product and visited our team’s Mission Control Center.

Problem

How can we design an augmented reality interface that efficiently aids task completion during lunar exploration missions?

Mission Concept

The software consists of five main features required during Mars exploration.

1. Egress

2. Navigation

3. Repair

4. Geological Sampling

5. LMCC

Egress initiates the astronauts’ lunar exploration. The interface must present the proper protocols to astronauts while maintaining stable communication with the Telemetry Stream Server (TSS) at the control center.
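As a minimal sketch of that link from the Unity side (the endpoint URL, port, and JSON handling below are placeholders, not the real TSS API), the headset can poll the server over HTTP:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of polling the Telemetry Stream Server (TSS) from Unity.
public class TssPoller : MonoBehaviour
{
    [SerializeField] private string tssUrl = "http://192.168.0.10:14141/telemetry"; // hypothetical endpoint
    [SerializeField] private float pollIntervalSeconds = 1f;

    private IEnumerator Start()
    {
        while (true)
        {
            using (UnityWebRequest req = UnityWebRequest.Get(tssUrl))
            {
                yield return req.SendWebRequest();
                if (req.result == UnityWebRequest.Result.Success)
                    OnTelemetry(req.downloadHandler.text); // hand raw JSON to the active feature
                else
                    Debug.LogWarning($"TSS request failed: {req.error}");
            }
            yield return new WaitForSeconds(pollIntervalSeconds);
        }
    }

    private void OnTelemetry(string json)
    {
        // Parse and route telemetry to egress, repair, or sampling modes here.
        Debug.Log(json);
    }
}
```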



Final Product

1. Hand Menu

Accessible with a flip of the hand, this menu accounts for astronauts’ limited tactile mobility, ensuring both accessibility and task efficiency.
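The core palm-up check behind a hand menu is small. Here is a from-scratch sketch, assuming the hand-tracking layer exposes the palm pose as a Transform whose up vector approximates the palm normal (MRTK3's own hand-menu building blocks would normally handle this):

```csharp
using UnityEngine;

// Simplified palm-facing check for showing and hiding the hand menu.
public class HandMenuToggle : MonoBehaviour
{
    [SerializeField] private Transform palm;       // palm pose supplied by hand tracking
    [SerializeField] private GameObject handMenu;  // menu panel anchored near the hand
    [SerializeField, Range(0f, 1f)] private float facingThreshold = 0.7f;

    private void Update()
    {
        // The palm is "flipped" toward the user when its normal roughly points at the camera.
        Vector3 toCamera = (Camera.main.transform.position - palm.position).normalized;
        bool palmFacingUser = Vector3.Dot(palm.up, toCamera) > facingThreshold;
        handMenu.SetActive(palmFacingUser);
    }
}
```

The dot-product threshold is the main tuning knob: a higher value suppresses accidental triggers while gloves are in use, at the cost of requiring a more deliberate flip.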

2. Egress

The interface displays step-by-step protocols, updating in real-time as astronauts complete tasks. Completed steps are marked with a checkmark, ensuring clear progress tracking and focus.
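A sketch of the checklist state behind that progress tracking; the step labels below are invented placeholders, with the real steps and completion signals coming from the mission procedure and TSS telemetry:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the step-by-step egress checklist state.
public class EgressChecklist : MonoBehaviour
{
    private class Step
    {
        public string Label;
        public bool Done;
        public Step(string label) { Label = label; }
    }

    private readonly List<Step> steps = new List<Step>
    {
        new Step("Verify suit pressure"),   // placeholder steps
        new Step("Switch to suit O2"),
        new Step("Depressurize airlock"),
    };

    // Called when telemetry reports a step completed.
    public void MarkDone(int index)
    {
        steps[index].Done = true;
        Refresh();
    }

    private void Refresh()
    {
        foreach (Step s in steps)
            Debug.Log((s.Done ? "✓ " : "• ") + s.Label); // the UI renders a checkmark here
    }
}
```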

3. Map - Pan

Navigation mode can be toggled on and off via the hand menu. A clear visual hierarchy across the navigation system's various functions is essential to ensure usability.

When the map is initiated, astronauts can intuitively explore the terrain by navigating the grid-based map using fingertip gestures.
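A minimal sketch of that pan behavior, assuming the gesture layer reports each frame's fingertip drag delta in panel space:

```csharp
using UnityEngine;

// Sketch of fingertip panning on the grid-based map.
public class MapPanner : MonoBehaviour
{
    [SerializeField] private RectTransform mapContent; // the scrollable map content
    [SerializeField] private float panSpeed = 1f;
    [SerializeField] private Vector2 panLimits = new Vector2(500f, 500f);

    // Called by the gesture handler with this frame's fingertip movement.
    public void OnDrag(Vector2 fingertipDelta)
    {
        Vector2 target = mapContent.anchoredPosition + fingertipDelta * panSpeed;
        // Clamp so the astronaut cannot pan past the mapped terrain.
        target.x = Mathf.Clamp(target.x, -panLimits.x, panLimits.x);
        target.y = Mathf.Clamp(target.y, -panLimits.y, panLimits.y);
        mapContent.anchoredPosition = target;
    }
}
```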

4. Map - Add POI (Point of Interest) Marker

Clicking the 'Add POI' button activates POI-adding mode, allowing astronauts to drop a POI marker at a desired location by tapping on the map.

5. Map - Obstacle Marker

Because hazardous locations need distinct treatment, the 'Obstacle' button activates obstacle-adding mode, allowing astronauts to place a glowing orange marker at locations that pose danger and require caution.
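Since the two marker modes differ only in the prefab they spawn, a single placement routine can serve both. A sketch with illustrative names:

```csharp
using UnityEngine;

// One placement routine serves both modes; only the marker prefab differs.
public enum MarkerType { Poi, Obstacle }

public class MarkerPlacer : MonoBehaviour
{
    [SerializeField] private GameObject poiPrefab;
    [SerializeField] private GameObject obstaclePrefab; // glowing orange variant

    public MarkerType ActiveMode { get; set; } = MarkerType.Poi;

    // Called when the astronaut taps the map; 'mapHit' is the tap position on
    // the map surface, resolved by the interaction layer's raycast.
    public void PlaceMarker(Vector3 mapHit)
    {
        GameObject prefab = ActiveMode == MarkerType.Obstacle ? obstaclePrefab : poiPrefab;
        Instantiate(prefab, mapHit, Quaternion.identity, transform);
    }
}
```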

6. Map - View Marker

When astronauts select a dropped POI marker, a pop-up menu appears above it with a 'View' button that transitions to a detailed page about the POI.

7. Map - Navigation

On the detailed page, astronauts will find a 'Navigate' button in the bottom-right corner. Clicking it activates navigation mode, drawing a white line to guide them from the POI to the lander (UIA).
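A minimal sketch of that guidance line using Unity's LineRenderer; a real route would follow terrain and waypoints rather than the single straight segment drawn here:

```csharp
using UnityEngine;

// Draws the white guidance line between the selected POI and the lander (UIA).
[RequireComponent(typeof(LineRenderer))]
public class NavigationLine : MonoBehaviour
{
    private LineRenderer line;

    private void Awake()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 2;
        line.startWidth = line.endWidth = 0.02f;            // thin line in meters
        line.startColor = line.endColor = Color.white;
        line.enabled = false;
    }

    public void Show(Vector3 poiPosition, Vector3 landerPosition)
    {
        line.SetPosition(0, poiPosition);
        line.SetPosition(1, landerPosition);
        line.enabled = true;
    }

    public void Hide() => line.enabled = false;
}
```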

8. Map - Add Image

The POI pop-up menu includes an 'Add Image' option. When astronauts select this button, the interface transitions to the Add Image screen, allowing them to capture a photo using Magic Leap's camera.
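On device this uses Magic Leap's own camera API; as a device-agnostic stand-in, Unity's WebCamTexture illustrates the same capture-encode-save flow:

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

// Device-agnostic stand-in for the photo capture flow. On the Magic Leap 2
// the platform camera API is used instead; this only sketches the sequence.
public class PhotoCaptureSketch : MonoBehaviour
{
    public IEnumerator Capture(string savePath)
    {
        var cam = new WebCamTexture();
        cam.Play();
        // WebCamTexture reports a 16x16 size until the first real frame arrives.
        yield return new WaitUntil(() => cam.width > 16);

        var photo = new Texture2D(cam.width, cam.height);
        photo.SetPixels(cam.GetPixels());
        photo.Apply();
        File.WriteAllBytes(savePath, photo.EncodeToPNG()); // save for the POI record
        cam.Stop();
    }
}
```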

9. Repair

The repair function is accessible through the hand menu. When repair mode is initiated, a task progress bar appears at the top, similar to the egress interface, outlining the steps astronauts need to follow for a successful repair.

10. Geological Sampling - Add Image

The geological sampling interface is accessed through the hand menu, allowing astronauts to enter geological sampling mode. In this mode, astronauts begin by scanning a rock with an RFID scanner. Once scanned, the interface activates a camera view that automatically captures a photo upon detecting the rock.

11. Geological Sampling - Rock Data

After the picture is successfully captured and saved, the scanner transmits information about the scanned rock to the TSS. The TSS processes the data and sends detailed rock information back to the interface, which is then displayed to the astronaut.
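An illustrative receiver for that round trip; the JSON field names below are assumptions, not the actual TSS schema:

```csharp
using System;
using UnityEngine;

// Illustrative shape for rock data returned by the TSS after a scan.
[Serializable]
public class RockData
{
    public int id;
    public string name;
    public float SiO2; // composition percentages (assumed fields)
    public float FeO;
}

public class RockDataReceiver : MonoBehaviour
{
    // Called with the JSON body the TSS sends back after processing a scan.
    public void OnRockData(string json)
    {
        RockData rock = JsonUtility.FromJson<RockData>(json);
        Debug.Log($"Rock {rock.id} ({rock.name}): SiO2 {rock.SiO2}%, FeO {rock.FeO}%");
        // The sampling panel binds these fields for the astronaut to review.
    }
}
```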

12. Geological Sampling - Add Voice Note

While the interface shows thorough information about the rock, astronauts can also manually add supplementary details using voice notes.

13. Geological Sampling - New Geo Sample

After successfully scanning a rock and adding a voice memo, astronauts can easily begin another geological sampling by clicking the ‘New Geo Sample’ button to reinitiate geological sampling mode and add the next sample to the database.

14. LMCC

The LMCC oversees all astronaut operations and can intervene when necessary. It is designed for a separate user stationed at a dual-monitor setup throughout the mission. The proposed web console integrates five key features to enhance task management, data accessibility, user interaction, telemetry monitoring, and safety protocols for both astronauts and the LMCC. The AR system continuously updates in real time, reflecting the LMCC's map edits and text signals to ensure seamless coordination.

Software Development

Our system architecture is organized to integrate multiple components seamlessly. The backend of the AR software is developed using C# scripting, while the LMCC is built with TypeScript and React. Data, such as video feeds, is transmitted to LMCC via HTTP requests. Both the Magic Leap 2 and the LMCC compute unit receive data from the TSS. The UI is designed in Figma and developed using Unity and MRTK3, ensuring consistency and functionality. Finally, all systems are packaged and deployed natively as an application on the Magic Leap 2.
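As one concrete slice of that pipeline (the LMCC endpoint shown is hypothetical), the headset can POST an encoded frame to the LMCC with UnityWebRequest:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of pushing data (e.g., an encoded camera frame) from the headset
// to the LMCC over HTTP.
public class LmccUplink : MonoBehaviour
{
    [SerializeField] private string lmccUrl = "http://lmcc.local:3000/api/frame"; // hypothetical endpoint

    public IEnumerator SendFrame(byte[] pngBytes)
    {
        // Put() attaches the raw body; switching the verb makes it a POST.
        using (UnityWebRequest req = UnityWebRequest.Put(lmccUrl, pngBytes))
        {
            req.method = UnityWebRequest.kHttpVerbPOST;
            req.SetRequestHeader("Content-Type", "image/png");
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success)
                Debug.LogWarning($"LMCC upload failed: {req.error}");
        }
    }
}
```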

Testing

1. Field + Usability Testing

Testing was conducted at Roger Williams Park in daylight and open terrain, involving both design and development team members.

Through this process, the team identified the need to unify the design across AR and LMCC, enhance the clarity and accessibility of UI elements such as icons, buttons, and layout, and improve the task hierarchy on each screen to ensure a seamless user experience.

On the development front, the team focused on enhancing hand-tracking input under challenging conditions, optimizing component placement based on distance, refining head-tracking animation, and addressing issues related to object collision.

2. Test Week at NASA Johnson Space Center (JSC)

As challenge finalists, our RISD SUITS team traveled to Houston, TX, to present our work and conduct final product testing at the Johnson Space Center (JSC) facilities. I was one of the four team members selected for the on-site testing.

May 20 -

Testing & Evaluation 1

The initial on-site testing was conducted at the JSC Rock Yard and evaluated by a team member. The process was structured into three key components: briefing, testing, and debriefing.

Key finding: The extreme weather caused the Magic Leap hardware to overheat, rendering it unable to run our application.

May 22 -

Testing & Evaluation 2

On the second day of testing, we successfully addressed the overheating issue by using ice packs to cool the devices.

Through this testing process, we gained invaluable insights and constructive feedback on our software.

  • Telemetry Stream Steps: Provide a clear visualization of the telemetry stream status for better understanding.

  • Acronym Clarification: Add tooltips or explanations for acronyms to improve accessibility.

  • Color Accessibility: Replace orange hazard pins with accessible colors or visual alternatives.

  • Map Panel Toggle: Enable easy toggling of the map panel to improve navigation and obstacle awareness.

Exit Pitch

Watch our team's exit pitch!

Learnings

Considerations in an AR environment
An interface reads differently, both spatially and visually, in Figma than on an AR device. Acknowledging this gap and the importance of visibility and readability in spatial design, I learned to allocate ample time for testing prototypes on the target AR device to tackle unforeseen challenges effectively.

Communication Skills
Because Figma is a 2D tool and Unity a 3D engine, designers faced a steep learning curve when translating their work into 3D. As the team's sole designer-developer, I bridged the two disciplines, learning to communicate design intent and technical constraints clearly in both directions.

Understanding the design transition from 2D to 3D
As the only team member working across both design and development, I transitioned 2D components into the 3D engine by hand after the MRTK Figma Bridge did not work as expected. In 2D, a simple click suffices; in three-dimensional space, a component is composed of multiple layers. I learned to systematically decompose each 2D component according to its layer properties, such as color, icon, and interactivity, and reassemble it in 3D space.

Design for High Cognitive Demand Environments
An important insight gained during the testing week at JSC was the significance of double confirmation in high-cognitive demand environments. Despite the time-sensitive nature of lunar exploration missions, incorporating double confirmation is vital to prevent unexpected termination of specific procedures or tasks, fostering user confidence.
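As a rough illustration of that pattern (the timing window and names are my own assumptions, not the shipped design), a double-confirmation gate can be as small as an arm-then-commit state with a timeout:

```csharp
using System;
using UnityEngine;

// Minimal double-confirmation gate for consequential actions (e.g., aborting
// a procedure): the first press arms the action, a second press within the
// window commits it, and the gate disarms if the window expires.
public class DoubleConfirmButton : MonoBehaviour
{
    [SerializeField] private float confirmWindowSeconds = 5f;
    private float armedAt = -1f;

    public event Action Confirmed;

    public void OnPressed()
    {
        if (armedAt > 0f && Time.time - armedAt <= confirmWindowSeconds)
        {
            armedAt = -1f;
            Confirmed?.Invoke();      // second press: commit the action
        }
        else
        {
            armedAt = Time.time;      // first press: arm and prompt again
            Debug.Log("Press again to confirm.");
        }
    }

    private void Update()
    {
        if (armedAt > 0f && Time.time - armedAt > confirmWindowSeconds)
            armedAt = -1f;            // window expired: disarm quietly
    }
}
```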