Move.ai

Details

UI / UX / Prototyping • 2022-2024


Introduction

Move AI is a markerless motion capture company that makes it easy to bring realistic human motion to animated characters by turning 2D video into 3D motion data, using proprietary technology built on advanced AI, computer vision, biomechanics, and physics. In short, they are likely the future of motion capture. As an avid gamer and VR/AR enthusiast, I felt like I had just gotten an incredible opportunity to hop on this exciting train!

Challenge

What Move.ai had in mind was to bring their innovative technology to as many users as possible through easy-to-use apps, providing access to high-quality motion capture data without the need to spend tens or even hundreds of thousands of dollars on hardware or rent mocap studios. Additionally, their apps and technology eliminate the need for special suits, which can take nearly 45 minutes to put on, and for skilled technicians on site. One of the best parts is that you can set up your studio wherever you want—whether it's at home, in your backyard, or even at the Palais de Chaillot with a view of the Eiffel Tower.


My job, and a great challenge, was to help bring the company's vision to life. I helped build these apps and created a design system with a uniform look and feel that had to scale easily across different platforms. The most challenging part was simplifying the complex processes and technical knowledge required to prepare and record motion capture sessions, transforming them into an accessible mobile app experience.

First month, first hackathon and first tests

I joined Move.ai in June as the sole Product Designer, right when the team had the core tech ready and was starting work on the iOS app. My first few weeks were all about getting up to speed and brainstorming ideas for the app’s features, design, and overall vibe.

Then we had a hackathon in Portugal, where everyone from around the world got together. We started working on our first iOS app - Move Multi-Cam - focusing on building the app's first prototype and testing out different features. During testing, for example, we realized how quickly iPhones can overheat when used as cameras under the full Mediterranean sun, after we set them up on a tennis court to do some test captures. This taught us that we needed alerts to warn users about problems like overheating before they happen.
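The warning system we eventually shipped isn't detailed here, but for context, iOS reports the device's thermal state through ProcessInfo and posts a notification whenever it changes. Below is a minimal sketch, with hypothetical names and messages, of how such an overheating alert could be wired up in Swift:

```swift
import Foundation

/// A minimal sketch (not the shipped implementation) of driving an
/// overheating warning from the thermal state that iOS exposes.
final class ThermalMonitor: NSObject {
    /// Hypothetical hook the app could route into its alert system.
    var onWarning: ((String) -> Void)?

    override init() {
        super.init()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(thermalStateChanged),
            name: ProcessInfo.thermalStateDidChangeNotification,
            object: nil
        )
    }

    @objc private func thermalStateChanged() {
        switch ProcessInfo.processInfo.thermalState {
        case .serious:
            onWarning?("Device is getting hot; capture quality may suffer.")
        case .critical:
            onWarning?("Device is overheating; stop recording and let it cool down.")
        default:
            break // .nominal and .fair need no alert
        }
    }
}
```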

At the end of the hackathon, we had a really solid prototype of the iOS app and gained a lot of new insights into the features we needed to focus on. This experience helped us understand what it takes to deliver a great experience for our users.


Design, refine and implement

After the hackathon, armed with the prototype and all the insights we gathered, we began working on the proper version 1.0 of the app.

The plan for the Move Multi-Cam app was quite ambitious, as we needed to simplify the complex process of motion capture into a professional yet user-friendly app. The main feature of the app was the ability to transform any iOS device, from the iPhone 8 and above, into either a host device or a camera device. The host device would be the main control unit, used to start and stop recordings, name scenes, and manage the entire capture and uploading process. Each camera device would serve as one of up to six possible recording devices. After recording, all captured material would be transferred to a cloud app, where the recorded videos are processed and 3D animation data is extracted. In the cloud app, animators or managers can review the content and download the animation files. With all the initial requirements and needed functionalities defined, I got to work.
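To make that setup concrete, here is a hypothetical sketch, not Move.ai's actual code, of how the host/camera roles and the six-camera limit described above could be modeled in Swift:

```swift
import Foundation

// Illustrative names only; the shipped app's model is not shown here.
enum DeviceRole {
    case host            // controls recording, scene naming, and uploads
    case camera(id: Int) // one of up to six recording devices
}

struct CaptureSession {
    static let maxCameras = 6

    var sceneName: String
    var cameras: [DeviceRole] = []
    var isRecording = false

    /// Camera devices join the session until it is full.
    mutating func addCamera() -> Bool {
        guard cameras.count < Self.maxCameras else { return false }
        cameras.append(.camera(id: cameras.count + 1))
        return true
    }
}
```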

 

The app was built with professional users in mind, like 3D animators and game developers, so I designed the interface to feel similar to iMovie and other video recording and editing tools aimed at professionals and semi-professionals on iOS.

The color scheme was pulled from the cloud app Move.ai already used with their clients, keeping everything visually consistent since the iOS app is the starting point in a process that finishes in the cloud.

For icons, I suggested using SF Symbols, Apple’s built-in icon system. This choice came down to two main reasons. First, when we tested our brand font against Apple’s SF font, it became clear that while the brand font was great for marketing and the website, it wasn’t the best fit for small device screens. So we chose the SF font for the app, as it is designed to work seamlessly with the icons. Second, SF Symbols included all the iOS-specific icons we needed, like device icons, battery indicators, and connectivity symbols, which made it an easy and practical choice.
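As a small illustration of why this was practical: in SwiftUI, an SF Symbol is referenced by name and rendered in harmony with text set in the system font. The view below is a hypothetical example, not app code:

```swift
import SwiftUI

/// Hypothetical status row using built-in SF Symbols; no custom assets needed.
struct CameraStatusRow: View {
    var body: some View {
        HStack {
            Image(systemName: "iphone")     // device type
            Image(systemName: "battery.25") // battery indicator
            Image(systemName: "wifi")       // connectivity
            Text("Camera 1")                // SF font pairs with the symbols
        }
    }
}
```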

 

The tool I used for design was Figma, and for developer hand-off, I used Zeplin. Some might ask: why Zeplin when Figma has Dev Mode? While Dev Mode in Figma is great, in my experience Zeplin has one advantage in certain cases: it shows only one screen at a time, which can be much easier for developers who aren't familiar with designers' tools or with navigating large artboards. For those developers, Zeplin makes onboarding quicker and gets them up to speed faster.

As for the design and implementation process that I worked out with the team for this project: Initially, I collaborated with my product manager to define the functionality scope after they gathered the initial requirements from stakeholders. Once we developed the initial user stories, we met with the iOS developers to analyze whether our assumptions were feasible and if there were any potential obstacles. After this initial consultation, I began designing. Since Move.ai was an early-stage startup, to speed up the work on the functionality, I agreed with the product manager to start by designing screens of medium to high fidelity right away. After preparing the initial designs along with the user flows, I reviewed the entire project with the product manager, and we created a list of changes. This iteration process usually repeated 1-3 times, depending on the project.

Once I had the full scope of a given functionality, we arranged a meeting with the entire product team, where I presented the solution, and then we verified with the developers whether all elements were technically feasible and whether they had any concerns. Over time, as our team grew, this stage evolved: I would prepare a click-through prototype for everyone to review before the meeting, and during the meeting we focused on analysis, questions, and technical verification.

Next, the user stories were typically developed in Jira, and I often participated in these meetings to answer any questions about how something should be implemented or if there were animations for certain elements. After that, we moved on to the implementation phase. Again, the process changed over time. Initially, I would receive a new build of the app for verification every now and then, but as time went on, it became harder to visually spot some changes, or certain events had to occur for them to be noticeable. So, I was given access to GitHub. From that point on, I started verifying the implementation inside Xcode, running the relevant screens on different simulators, building the app on my device, and reviewing the code. Similarly, the way feedback was given changed — from writing comments in Jira and attaching visuals, to commenting directly on GitHub commits or, for simpler changes, making my own commits, which the development team would verify and accept if everything was fine. This saved a lot of time in some cases. Finally, our QA team came in to do the final verification, and as is typical in projects, we moved on to the next functionality, and the process repeated.

Lots of details and edge cases

All screens were designed with consideration for all base iPhone screen sizes, as well as a unified approach for all iPads. This was possible because, on iPads, the primary focus is on defining scaling and layout constraints that adapt to screen size, much like responsive web design. The screen designs also accounted for whether the user had an iPhone with Face ID or a Home button, adjusting certain padding so the interface aligned properly with the specific device for a better user experience.
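For anyone curious how the Face ID / Home button distinction plays out in code: rather than hard-coding device models, iOS reports safe area insets that differ between the two families, and padding can be derived from them. A simplified sketch of the idea (an assumed helper, not the app's actual code):

```swift
import UIKit

extension UIViewController {
    /// Hypothetical helper: Face ID devices report a non-zero bottom
    /// safe area inset (the home indicator); Home button devices report
    /// zero, so we fall back to a fixed padding value.
    var bottomControlPadding: CGFloat {
        let inset = view.safeAreaInsets.bottom
        return inset > 0 ? inset : 16
    }
}
```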

Because of the app's complexity, it also includes a lot of small details that, in many cases, serve as indicators to convey different information to users. For example, the app displays battery status with color changes for all connected camera devices. It features a robust notification system that informs the user about device temperature, connection status, available space, and recording status. Additionally, it uses icons to illustrate the exact device type being used and its assigned ID in the recording session. The app also provides data transfer status and shows where the recordings are located.
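As one small example of this kind of indicator, here is a hypothetical sketch, not the app's actual code, of mapping a connected camera's battery level to a status color:

```swift
import SwiftUI

/// Illustrative mapping from a battery level (0.0-1.0) to an indicator color.
func batteryColor(for level: Double) -> Color {
    switch level {
    case ..<0.2: return .red    // critically low: may not survive a take
    case ..<0.5: return .orange // low: warn before a long session
    default:     return .green  // healthy
    }
}
```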


Move One, a gateway to the mainstream

While we were still working on the Move Multi-Cam app, the company's R&D team was constantly improving the core technology. At one point a new idea arose: "Could we use just one camera to record and extract 3D motion data from it?" After a couple of weeks we knew we could, and from that point forward we started working on the company's next app - Move One.

Move One was designed to be a more approachable and user-friendly app compared to Move Multi-Cam. Its interface was intended to resemble apps like Snapchat and Instagram rather than professional tools. Because it was meant to be a more mainstream app, we chose a color scheme aligned with our brand colors rather than the professional tones used in our other apps. And since the main goal was to reach a broader audience, I paid special attention to designing the best possible onboarding experience and refining the interface’s intuitiveness. Additionally, certain features were designed to assist users during recording or help them maintain the correct posture during the calibration process.

To develop this app, we followed the same design and implementation process as with our first app. However, this time, we actively involved our Discord community by inviting them to beta test the app. The beta testing period lasted about six months, during which we expanded the pool of testers four times to gather fresh insights with each new wave.

This approach was incredibly helpful for collecting user feedback, as the specific way the app was used made traditional user testing in a controlled environment challenging. In this case, analytical data was mainly useful for making straightforward decisions, like fixing specific bugs or addressing gaps in the user journey.

Summary

Working on these two iOS apps for Move.ai brought me a great deal of professional satisfaction and joy. As a gamer and technology enthusiast, it was incredibly rewarding to contribute to what I believe is the future of the motion capture industry. Additionally, it gave me the opportunity to learn about and witness firsthand some production aspects of industries where 3D animation plays a key role in the business.

© 2024 Maciej Jurczak