Just completed some modifications to the ManuelBastioni YASP component to add some smoothing to the animation. I think it produces acceptable results for a first pass; most likely I'll need to jump in and tweak things later. But it does save a ton of time compared to going in manually and finding where each phoneme lands. If nothing else, the mark pass of the add-on, which marks the location of every phoneme, provides value on its own. A rough sketch of what that smoothing step could look like is below.
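To give a sense of the idea (this is only a minimal sketch, not the actual YASP code), smoothing phoneme-driven animation in Blender can be as simple as running a moving average over the shape-key keyframes and easing the interpolation between them. The object name and window size here are hypothetical.

```python
import bpy

def smooth_shapekey_fcurves(obj_name, window=1):
    """Apply a simple moving average to an object's shape-key keyframes."""
    obj = bpy.data.objects[obj_name]
    action = obj.data.shape_keys.animation_data.action
    for fcu in action.fcurves:
        points = fcu.keyframe_points
        # snapshot the original values so the average isn't computed on
        # already-smoothed neighbors
        values = [p.co[1] for p in points]
        for i, p in enumerate(points):
            lo = max(0, i - window)
            hi = min(len(values), i + window + 1)
            p.co[1] = sum(values[lo:hi]) / (hi - lo)
            p.interpolation = 'BEZIER'  # ease the transition between phonemes
        fcu.update()

# Example usage on a hypothetical MB-Lab character mesh:
# smooth_shapekey_fcurves("MBlab_body", window=1)
```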

I think I've reached a point with the Automatic Lip Sync Project where any further effort tweaking it would be past the point of diminishing returns. The only other things worth putting effort into are splitting it out as a separate add-on and porting it to Windows. I can see how that would be useful when working with MakeHuman characters, but I'll cross that bridge when I get to it.

The next step for me is to start using the tools I've created or modified to make a short scene. This will be key to working out the kinks in my workflow.

Here is a video I made playing around with lip-sync. The main purpose is to show how the lip-sync and the facial animation can be combined. Note that this was recorded in real time, so the frame rate is pretty low.