Blend shapes (ARKit + Oculus visemes) and dynamic bones #302

@met4citizen

Description

Hi. I am working on the TalkingHead project, which provides a browser-based JavaScript class for real-time lip sync using 3D full-body avatars. Demo videos and use cases are available in the project README.

I'm new to MakeHuman/MPFB, but I created a character using MPFB and exported it from Blender for use with TalkingHead. After some post-processing and minor adjustments (renaming some objects/bones, adding LeftEye/RightEye bones, cleaning up materials for GLB/Three.js/WebGL, etc.), the character worked pretty well in the TalkingHead test web app.
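For anyone curious, the post-processing was nothing fancy. Here is a rough bpy sketch of the kind of adjustments involved (the object name, bone names, rename map, and coordinates below are placeholders, not the exact values from my export):

```python
import bpy

# Placeholder names and coordinates; the actual MPFB export differs.
arm = bpy.data.objects["Armature"]
bpy.context.view_layer.objects.active = arm

# Rename bones to the names TalkingHead expects (partial example map).
rename_map = {"neck01": "Neck", "head": "Head"}
for old, new in rename_map.items():
    bone = arm.data.bones.get(old)
    if bone:
        bone.name = new

# Add LeftEye/RightEye bones parented to the head.
bpy.ops.object.mode_set(mode="EDIT")
ebones = arm.data.edit_bones
for name, x in (("LeftEye", 0.03), ("RightEye", -0.03)):
    if name not in ebones:
        eye = ebones.new(name)
        eye.head = (x, -0.09, 1.67)  # rough eye socket position
        eye.tail = (x, -0.12, 1.67)
        eye.parent = ebones.get("Head")
bpy.ops.object.mode_set(mode="OBJECT")
```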

The most time-consuming step was creating the facial blend shapes required for lip sync and facial expressions. Using the Faceit Blender add-on, this process took only about 15 minutes, but relying on a commercial add-on is not ideal, as it would require developers and/or end users to purchase additional software.

This leads me to ask: would you consider adding native support for ARKit and Oculus viseme blend shapes in a future release?
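For concreteness, these are the two naming conventions I mean. The ARKit list below is a hand-picked subset for illustration (the full set has 52 shapes); to my knowledge the Oculus viseme set is these 15:

```python
# A few of the 52 ARKit facial blend shapes (illustrative subset).
ARKIT_SUBSET = [
    "browInnerUp", "eyeBlinkLeft", "eyeBlinkRight",
    "jawOpen", "mouthSmileLeft", "mouthSmileRight",
    "mouthPucker", "tongueOut",
]

# The 15 Oculus visemes (as used by e.g. Ready Player Me avatars).
OCULUS_VISEMES = [
    "viseme_sil", "viseme_PP", "viseme_FF", "viseme_TH", "viseme_DD",
    "viseme_kk", "viseme_CH", "viseme_SS", "viseme_nn", "viseme_RR",
    "viseme_aa", "viseme_E", "viseme_I", "viseme_O", "viseme_U",
]
```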

This feature request is likely related to #169. As I understand it, it is possible to create morph targets (a.k.a. blend shapes) in MPFB, but would these need to be created separately for each base mesh / facial topology, and separately for each available asset (such as teeth, tongue, eyebrows, and eyelashes)?
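To illustrate why I suspect this: a Blender shape key stores one offset per vertex, so it is inherently tied to a specific mesh and topology, and something like the sketch below would have to be repeated per object. The object name, shape name, region test, and offset here are purely illustrative:

```python
import bpy
from mathutils import Vector

# Purely illustrative; a real target would come from a facial rig or
# a sculpt, not a uniform offset.
obj = bpy.data.objects["Body"]          # placeholder object name
if obj.data.shape_keys is None:
    obj.shape_key_add(name="Basis")
key = obj.shape_key_add(name="jawOpen", from_mix=False)

# Move vertices in the (hypothetical) jaw region downward.
for i, v in enumerate(obj.data.vertices):
    if v.co.z < 1.55:                   # placeholder region test
        key.data[i].co = v.co + Vector((0.0, 0.0, -0.01))
```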

For what it is worth, creating blend shapes with Faceit does not require much artistic skill. The process mainly involves registering the relevant objects and placing landmarks; based on that information, the add-on builds a facial rig and automatically generates the blend shapes from it. If reusable targets are indeed the right way to implement this in MPFB, the Faceit add-on might be useful for the initial creation.

In addition to blend shapes, I was also wondering whether assets can include extra bones and weights. I ask because, in addition to text-driven and audio-driven lip sync, the TalkingHead class includes a lightweight physics engine that enables real-time physics simulation for additional "dynamic" bones, such as hair bones.
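To make that concrete, here is a rough sketch of what I mean by an asset carrying extra bones and weights: a hypothetical three-bone hair chain plus matching vertex groups, so the exported GLB includes everything the runtime physics would drive. All names, positions, and weights below are placeholders; real weights would be painted:

```python
import bpy

arm = bpy.data.objects["Armature"]   # placeholder names
hair = bpy.data.objects["Hair"]

# Add a three-bone chain hanging off the head bone.
bpy.context.view_layer.objects.active = arm
bpy.ops.object.mode_set(mode="EDIT")
ebones = arm.data.edit_bones
parent = ebones.get("Head")
for i in range(3):
    b = ebones.new(f"HairDyn{i}")
    b.head = (0.0, 0.02 + 0.04 * i, 1.75)        # placeholder positions
    b.tail = (0.0, 0.02 + 0.04 * (i + 1), 1.75)
    b.parent = parent
    b.use_connect = i > 0
    parent = b
bpy.ops.object.mode_set(mode="OBJECT")

# Matching vertex groups so the skin weights survive the GLB export;
# uniform weights are a stand-in for properly painted ones.
indices = [v.index for v in hair.data.vertices]
for i in range(3):
    name = f"HairDyn{i}"
    vg = hair.vertex_groups.get(name) or hair.vertex_groups.new(name=name)
    vg.add(indices, 1.0 / 3.0, "REPLACE")
```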

Labels: enhancement (New feature or request)
