Dear Contributors,

I hope this message finds you well. I am reaching out to express my profound admiration for the innovative work you have presented in "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold". The ingenuity of your approach has sparked a keen interest in the potential applications and extensions of your research.
Having perused your documentation and experimented with the provided codebase, I am intrigued by the prospect of extending the DragGAN framework to support a wider array of GAN architectures beyond StyleGAN2 and StyleGAN3. My query pertains to the feasibility and potential methodologies for adapting your interactive point-based manipulation technique to other generative models, such as BigGAN or VQ-VAE-2, which exhibit distinct latent space characteristics.
I am particularly interested in understanding the following aspects:
1. The adaptability of the DragGAN algorithm to GAN architectures with different latent space dimensions and configurations.
2. The modifications the current framework would need to accommodate the manifold properties of alternative generative models.
3. The impact of such adaptations on the fidelity and controllability of the interactive manipulation features.
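To make point (1) concrete, the two model-agnostic pieces one would need to port are DragGAN's motion-supervision loss and its feature-based point tracking; everything else (which latent to optimise, which intermediate feature map to read) is architecture-specific. The sketch below is a hypothetical, simplified NumPy illustration of those two steps, not the authors' implementation: function names, the L1 distance, and the square search windows are my assumptions, and the real method operates on StyleGAN2's intermediate feature maps while optimising the latent code with gradient descent.

```python
import numpy as np

def motion_supervision_loss(feat, handle, target, radius=1):
    """Sum of L1 feature differences between a patch around the handle
    point and the same patch shifted one unit step toward the target.
    Driving this loss down nudges the handle's content toward the target.
    feat: (H, W, C) feature map; handle/target: (row, col) points."""
    d = np.array(target, dtype=float) - np.array(handle, dtype=float)
    d = d / (np.linalg.norm(d) + 1e-8)  # unit step toward the target
    h, w = feat.shape[:2]
    loss = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = handle[0] + dy, handle[1] + dx
            y2, x2 = int(round(y + d[0])), int(round(x + d[1]))
            if 0 <= y < h and 0 <= x < w and 0 <= y2 < h and 0 <= x2 < w:
                loss += np.abs(feat[y, x] - feat[y2, x2]).sum()
    return loss

def track_point(feat, ref_vec, prev, radius=2):
    """Point tracking: after the latent update moves the image content,
    relocate the handle by nearest-neighbour search in feature space
    within a small window around its previous position."""
    h, w = feat.shape[:2]
    best, best_dist = prev, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = prev[0] + dy, prev[1] + dx
            if 0 <= y < h and 0 <= x < w:
                dist = np.abs(feat[y, x] - ref_vec).sum()
                if dist < best_dist:
                    best, best_dist = (y, x), dist
    return best
```

For a model like BigGAN, the open question is which intermediate activations play the role of StyleGAN's feature maps and whether its class-conditional latent is smooth enough for this iterative optimisation to stay on the image manifold; for VQ-VAE-2, the discrete codebook would likely rule out gradient-based latent updates altogether.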
I believe that broadening the compatibility of DragGAN could significantly enhance its utility for creative and research endeavours alike. I would be most grateful if you could share your insights on this matter or direct me towards any ongoing efforts or considerations in this domain.
Thank you for your time and for sharing your groundbreaking work with the community. I eagerly await your response and any guidance you may offer.
Best regards,
yihong1120