AI-Enabled-Choreography

This project trains a Variational Autoencoder (VAE) with LSTM layers on motion-capture data to generate short dance phrases, compares held-out test sequences with their reconstructions, and visualizes generated sequences alongside real ones.

  1. Load motion capture data stored as .npy arrays of shape (# joints, # timesteps, # dimensions) and visualize the dance sequences.
  2. Build a 3D plotting function that renders a dancer's movement over time, with each joint drawn as a point (see the plotting sketch after this list).
  3. Train a generative model, a Variational Autoencoder (VAE) with LSTM layers, to generate short dance phrases (a model sketch follows the plotting example).
  4. Compare input sequences from the test set with their decoded counterparts produced by the VAE.
  5. Generate new dance sequences with the trained model and visualize them alongside real sequences for evaluation (sketched after the model definition).
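
Below is a minimal sketch of steps 1 and 2, assuming the data is a NumPy array of shape (# joints, # timesteps, 3) and using matplotlib for the 3D view. The file name motion_capture.npy and the frame index are placeholders, not the repository's actual files or settings.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_frame(data, t, ax=None):
    """Scatter the joints of frame t; data has shape (n_joints, n_timesteps, 3)."""
    if ax is None:
        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
    ax.scatter(data[:, t, 0], data[:, t, 1], data[:, t, 2], s=20)
    ax.set_xlabel("x")
    ax.set_ylabel("y")
    ax.set_zlabel("z")
    return ax

data = np.load("motion_capture.npy")  # placeholder path; shape (# joints, # timesteps, # dims)
plot_frame(data, t=0)                 # draw the first frame
plt.show()
```

Animating a full phrase amounts to redrawing this scatter for successive frames, for example with matplotlib.animation.FuncAnimation.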

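The following is a sketch of the model in step 3, assuming PyTorch, with each timestep flattened to a vector of joint coordinates. The layer sizes, the choice to feed the latent vector at every decoder timestep, and the β-weighted loss are illustrative assumptions rather than the repository's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMVAE(nn.Module):
    """Sequence VAE: LSTM encoder -> Gaussian latent -> LSTM decoder."""
    def __init__(self, input_dim, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.to_output = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):                      # x: (batch, seq_len, input_dim)
        _, (h, _) = self.encoder(x)           # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z, seq_len):
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)  # initial hidden state from z
        c0 = torch.zeros_like(h0)
        dec_in = z.unsqueeze(1).repeat(1, seq_len, 1)           # feed z at every timestep
        out, _ = self.decoder(dec_in, (h0, c0))
        return self.to_output(out)                              # (batch, seq_len, input_dim)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, x.size(1)), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    """MSE reconstruction term plus KL divergence to the standard normal prior."""
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl
```

Training would minimize vae_loss over batches of phrases shaped (batch, timesteps, joints × dims).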
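
Finally, a sketch of steps 4 and 5, reusing the hypothetical LSTMVAE and plot_frame from the blocks above; the joint count, sequence length, and the random test_batch are placeholders standing in for the repository's real test split and trained weights.

```python
import torch

# Placeholder dimensions: 29 joints in 3 dimensions, phrases of 60 timesteps.
n_joints, n_dims, seq_len = 29, 3, 60
input_dim = n_joints * n_dims

model = LSTMVAE(input_dim)                        # in practice, load trained weights here
test_batch = torch.randn(8, seq_len, input_dim)   # stands in for real test phrases

model.eval()
with torch.no_grad():
    # Step 4: reconstruct held-out phrases and measure how closely they match.
    recon, mu, logvar = model(test_batch)
    per_phrase_mse = ((recon - test_batch) ** 2).mean(dim=(1, 2))
    print("mean test reconstruction MSE:", per_phrase_mse.mean().item())

    # Step 5: sample latents from the standard normal prior and decode new phrases.
    z = torch.randn(4, 32)                        # 4 new phrases, latent_dim = 32
    generated = model.decode(z, seq_len)

# Reshape one generated phrase back to (joints, timesteps, dims) for plot_frame,
# then plot it next to a real test phrase for side-by-side evaluation.
fake = generated[0].reshape(seq_len, n_joints, n_dims).permute(1, 0, 2).numpy()
real = test_batch[0].reshape(seq_len, n_joints, n_dims).permute(1, 0, 2).numpy()
```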