Using artificial intelligence to assess hand trajectories during a touchscreen shape-tracing task

Abstract

Artificial intelligence-based motion capture systems offer a markerless, lightweight approach to kinematic analysis across a variety of human motor learning investigations. Yet many of these tools remain unvalidated, and optimal model parameters are undefined. Here, we aimed to determine optimal parameters for a DeepLabCut (DLC) model assessing upper limb movements. Across two experiments, we systematically tested the impact of two model parameters on the accuracy of the trained DLC model. Participants (N = 29) performed 3 blocks of a touchscreen-based shape-tracing task using their index finger. Each block was captured by a camera (GoPro Hero8, 60 Hz; 87 videos in total). Model accuracy was assessed as pixel error at the DLC evaluation stage, following training (500,000 iterations; 20 frames labelled per video). Accuracy was compared across: Exp 1) 1, 2, or 3 virtual markers labelled per frame; and Exp 2) 5 vs. 10 videos used to train the model. Results: Exp 1) Increasing the number of virtual markers from 1 to 2 reduced test error, whereas adding a third yielded no further improvement (235.95 px vs. 7.52 px vs. 28.37 px). Exp 2) Increasing the number of training videos reduced test error (235.95 px vs. 8.19 px). Our findings indicate that 2 labels and 10 videos (>10% of the dataset) produced the superior DLC model for capturing the shape-tracing movement. Further research will test this model against a validated system (e.g., comparing finger coordinates output by DLC with those recorded by the touchscreen). This work informs applications of DLC in investigations of human movement.
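
For readers unfamiliar with the DLC pipeline, a minimal sketch of a workflow corresponding to the parameters above is given below (Python, using DeepLabCut's documented API). The project name, experimenter label, video paths, and marker names are hypothetical examples, not the study's actual configuration; marker names and frames-per-video are set in the project's config.yaml rather than passed to these functions.

    import deeplabcut

    # Create a project from a subset of the task videos
    # (e.g., 10 of the 87 recordings; paths are illustrative)
    videos = [f"videos/tracing_{i:02d}.mp4" for i in range(10)]
    config_path = deeplabcut.create_new_project(
        "shape-tracing", "lab", videos, copy_videos=True
    )

    # Edit config.yaml at this point: list the virtual markers under
    # 'bodyparts' (e.g., two fingertip labels) and set
    # 'numframes2pick: 20' for 20 frames per video.

    # Extract frames for manual labelling (k-means frame sampling)
    deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")

    # Label the virtual markers in the GUI, then build the training set
    deeplabcut.label_frames(config_path)
    deeplabcut.create_training_dataset(config_path)

    # Train for 500,000 iterations
    deeplabcut.train_network(config_path, maxiters=500000)

    # Evaluate: reports train/test error in pixels on held-out frames
    deeplabcut.evaluate_network(config_path, plotting=True)

The test pixel error reported by evaluate_network is the accuracy measure compared across conditions in the two experiments.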

Acknowledgments: This work was supported by funding awarded to SK through a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant.