Reimplement a movement-generation neural net that can (hopefully) run in real time, so that the bot can use non-verbal communication as well as navigate the world.
Making physical movement more natural and more intelligent is one of the long-term goals of MetaGen, for which we are gathering the "ImageNet of Human Behaviour". In the meantime, we can begin implementing models that allow for limited functionality, like:
- simple speech-driven non-verbal communication (as done in StyleGestures)
- simple trajectory-driven full-body movement (as done in MoGlow, or the AIST++ paper); a rough sketch of both control signals follows this list
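To make the two kinds of conditioning concrete, here is a minimal sketch of the per-frame control signals each task would feed the model. The frame rate, feature count, and window length are illustrative assumptions, not values taken from the papers.

```python
import numpy as np

FPS = 20      # assumed frame rate; the papers use comparable rates
SECONDS = 5   # assumed control-window length for this example

# Speech-driven gestures: per-frame acoustic features time-aligned with
# the motion (StyleGestures conditions on spectral speech features;
# 27 bands here is an illustrative choice, not the paper's exact number).
speech_ctrl = np.zeros((FPS * SECONDS, 27), dtype=np.float32)

# Trajectory-driven locomotion: per-frame root translation on the ground
# plane plus heading change, in the spirit of MoGlow's locomotion control.
traj_ctrl = np.zeros((FPS * SECONDS, 3), dtype=np.float32)  # (dx, dz, dheading)
```

Either signal plugs into the same autoregressive loop; only the control dimension changes.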
The NN models I'm considering are:
- MoGlow (can run in real time; see the inference-loop sketch below)
- the AIST++ cross-modal transformer (I'm not sure yet whether it can run in real time, but it may produce better motion)
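Whether either model is viable in real time comes down to whether one autoregressive step fits in the frame budget. Below is a minimal timing harness, assuming a MoGlow-style setup (sample a latent per frame, condition on a short pose history and a control signal). `TinyMotionStep` is a hypothetical stand-in MLP so the loop runs without the real weights; an actual MoGlow step inverts a conditional normalizing flow, and all dimensions here are made up for illustration.

```python
import time
import torch
import torch.nn as nn

FPS = 20          # assumed target frame rate
POSE_DIM = 63     # e.g. 21 joints x 3 channels; placeholder size
CTRL_DIM = 3      # e.g. (dx, dz, dheading) trajectory control
CONTEXT = 10      # autoregressive history length, MoGlow-style

class TinyMotionStep(nn.Module):
    """Hypothetical stand-in for one autoregressive generation step."""
    def __init__(self):
        super().__init__()
        in_dim = POSE_DIM + CONTEXT * POSE_DIM + CTRL_DIM
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, z, history, control):
        # Map (per-frame latent, pose history, control) -> next pose.
        x = torch.cat([z, history.flatten(1), control], dim=1)
        return self.net(x)

@torch.no_grad()
def realtime_loop(model, n_frames=100):
    history = torch.zeros(1, CONTEXT, POSE_DIM)  # seed with a rest pose
    budget = 1.0 / FPS
    for t in range(n_frames):
        start = time.perf_counter()
        z = torch.randn(1, POSE_DIM)        # latent sample for this frame
        control = torch.zeros(1, CTRL_DIM)  # desired trajectory delta
        pose = model(z, history, control)
        # Slide the history window forward by one frame.
        history = torch.cat([history[:, 1:], pose.unsqueeze(1)], dim=1)
        elapsed = time.perf_counter() - start
        if elapsed > budget:
            print(f"frame {t}: {elapsed * 1e3:.1f} ms exceeds "
                  f"{budget * 1e3:.1f} ms budget")
        # 'pose' would be streamed to the avatar here

if __name__ == "__main__":
    realtime_loop(TinyMotionStep())
```

Swapping the stand-in for the real model (and a speech-feature control for gestures) keeps the loop the same; the timing printout then tells us directly whether we stay inside the frame budget.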