Georgia Tech Team Helps Virtual Humans Learn To Get Dressed

Animated characters can mimic human behavior extremely well, but there's one trick that digital denizens haven't quite mastered: getting dressed.

Research from the Georgia Institute of Technology has produced a systematic tool that allows animators to create realistic motion for virtual humans as they get dressed. The new algorithm enables virtual characters to intelligently manipulate simulated cloth, completing the dressing task with different dressing styles for various types of garments and fabrics. The research team's long-term goal is to develop assistive technologies that would enable future robots to help disabled or elderly adults with self-care tasks such as getting dressed.

Researchers Karen Liu and Greg Turk are co-authors of the paper "Animating Human Dressing," presented at SIGGRAPH 2015, the ACM conference on Computer Graphics and Interactive Techniques, Aug. 9-13 in Los Angeles.