Duration: 2 days

Learning Path

Fundamentals of Generative AI

  • Introduction to generative artificial intelligence and its applications
  • Types of generative AI models: generative adversarial networks (GANs), variational autoencoders (VAEs), and autoregressive models
  • Understanding the difference between generative and discriminative models

Probability Foundations for Generative Modeling

  • Probability distributions and their role in generative modeling
  • Maximum likelihood estimation (MLE) and maximum a posteriori estimation (MAP)
  • Sampling techniques for generating data samples from probability distributions
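
The MLE and sampling topics above can be made concrete with a one-dimensional Gaussian, where the maximum likelihood estimates have closed forms. A minimal NumPy sketch (the "true" distribution parameters and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from an assumed "true" data distribution: N(mu=3, sigma=2).
data = rng.normal(loc=3.0, scale=2.0, size=10_000)

# Maximum likelihood estimates for a Gaussian have closed forms:
# mu_hat is the sample mean, sigma_hat the (biased) sample standard deviation.
mu_hat = data.mean()
sigma_hat = data.std()

# Generative step: sample new synthetic data from the fitted model.
synthetic = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
```

The same fit-then-sample pattern underlies the deep generative models later in the course; only the family of distributions and the estimation procedure change.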

Generative Adversarial Networks (GANs)

  • Overview of the generative adversarial network (GAN) architecture
  • Training procedure: adversarial optimization of generator and discriminator networks
  • Applications of GANs in image generation, style transfer, and data augmentation

Hands-on Lab: Building a GAN

  • Participants implement a basic GAN model using TensorFlow or PyTorch
  • Training the GAN on a simple dataset and generating new synthetic samples
  • Experimenting with hyperparameters and architecture modifications
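
To expose the training loop that TensorFlow or PyTorch automate in the lab, here is a deliberately tiny GAN on 1-D data with the gradients of the standard losses written out by hand (non-saturating loss for the generator). The linear generator/discriminator, learning rate, and step count are illustrative choices, not the lab's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real data: N(4, 1). Generator: x = a*z + b with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0      # generator parameters
w, c = 0.1, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake).mean()
    c += lr * ((1 - d_real) - d_fake).mean()

    # Generator: gradient ascent on log D(fake), pushing samples toward data.
    d_fake = sigmoid(w * fake + c)
    a += lr * ((1 - d_fake) * w * z).mean()
    b += lr * ((1 - d_fake) * w).mean()

samples = a * rng.normal(0.0, 1.0, 1000) + b
```

The alternating update order, not the specific networks, is the point: each step the discriminator improves its real/fake classification, then the generator moves its samples toward whatever the discriminator currently accepts.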

Variational Autoencoders (VAEs)

  • Introduction to variational autoencoders (VAEs) and their architecture
  • Objective function: reconstruction loss and KL divergence
  • Applications of VAEs in image generation, anomaly detection, and dimensionality reduction
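
The VAE objective above pairs a reconstruction term with a KL divergence between the encoder's Gaussian posterior and a standard normal prior. For diagonal Gaussians the KL term has a closed form, sketched here along with the reparameterization trick (the mean and log-variance values are illustrative encoder outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Illustrative encoder outputs for one input: mean and log-variance of a
# 2-D Gaussian posterior.
mu = np.array([0.5, -0.2])
log_var = np.array([0.1, -0.3])

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I), which
# keeps the sampling step differentiable with respect to mu and log_var.
eps = rng.normal(size=2)
z = mu + np.exp(0.5 * log_var) * eps

kl = kl_to_standard_normal(mu, log_var)
```

Note that the KL term vanishes exactly when the posterior equals the prior (mu = 0, log_var = 0) and is positive otherwise, which is what regularizes the latent space toward N(0, I).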

Autoencoder Architectures

  • Overview of autoencoder architecture and training procedure
  • Denoising autoencoders, sparse autoencoders, and convolutional autoencoders
  • Hands-on exercise: implementing and training an autoencoder model
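
As a minimal version of that exercise, the sketch below trains a linear autoencoder with hand-derived MSE gradients on synthetic 3-D data lying near a 1-D subspace, so a single latent unit can reconstruct it. Dimensions, learning rate, and step count are illustrative; a framework implementation would use nonlinear layers and an optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3-D points near a 1-D subspace, plus a little noise.
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))

# Linear autoencoder: z = X @ W_enc, X_hat = z @ W_dec, trained on MSE
# (gradients written by hand, up to a constant folded into the learning rate).
W_enc = rng.normal(scale=0.1, size=(3, 1))
W_dec = rng.normal(scale=0.1, size=(1, 3))
lr = 0.01

for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X                        # reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse = ((X - (X @ W_enc) @ W_dec) ** 2).mean()
```

A denoising variant would feed a corrupted copy of `X` through the encoder while still measuring `err` against the clean `X`; the training loop is otherwise unchanged.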

Adversarial Autoencoders (AAEs)

  • Introduction to adversarial autoencoders (AAEs) and their architecture
  • Combining elements of GANs and VAEs for improved generative modeling
  • Applications of AAEs in image generation and unsupervised representation learning
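
The GAN/VAE combination above boils down to two loss terms per training step: a plain reconstruction loss, plus an adversarial loss in which a discriminator on the *latent space* tries to separate encoder outputs from prior samples. The sketch below only computes these terms for one batch, using placeholder linear maps and hypothetical shapes (8-D data, 2-D latent); a real AAE alternates gradient updates on them:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder (placeholder)
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder (placeholder)
w_dis = rng.normal(scale=0.1, size=2)        # latent-space discriminator

x = rng.normal(size=(16, 8))                 # data batch
z_fake = x @ W_enc                           # encoder codes ("fake" latents)
z_real = rng.normal(size=(16, 2))            # samples from the prior p(z)

# 1) Reconstruction loss, exactly as in a plain autoencoder.
recon = ((x - z_fake @ W_dec) ** 2).mean()

# 2) Adversarial loss: the discriminator separates prior samples from codes;
#    the encoder is trained to fool it, shaping q(z) to match p(z).
d_real, d_fake = sigmoid(z_real @ w_dis), sigmoid(z_fake @ w_dis)
dis_loss = -(np.log(d_real) + np.log(1 - d_fake)).mean()
enc_adv_loss = -np.log(d_fake).mean()
```

Compared with a VAE, the adversarial term replaces the closed-form KL divergence, which lets the prior p(z) be any distribution that can be sampled from.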

Hands-on Lab: Building a VAE

  • Participants implement and train a variational autoencoder (VAE) model using TensorFlow or PyTorch
  • Generating new images and analyzing latent space representations
  • Fine-tuning VAE hyperparameters and architecture for improved performance
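
Once the VAE is trained, the generation and latent-analysis steps above reduce to decoding points from latent space. A sketch with a stand-in decoder (the `decode` function and its 2-D latent / 8-D output shapes are hypothetical; in the lab this would be the trained decoder network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained VAE decoder: 2-D latent -> 8-D output.
dec_weights = rng.normal(size=(2, 8))

def decode(z):
    return np.tanh(z @ dec_weights)

# 1) Unconditional generation: sample the standard normal prior and decode.
z_new = rng.normal(size=(5, 2))
samples = decode(z_new)

# 2) Latent-space analysis: decode points along the line between two codes;
#    with a well-trained model this yields a smooth morph between outputs.
z_a, z_b = rng.normal(size=2), rng.normal(size=2)
path = np.stack([decode((1 - a) * z_a + a * z_b)
                 for a in np.linspace(0.0, 1.0, 7)])
```

The same two operations, prior sampling and interpolation, are the standard way to inspect whether the latent space learned by the model is smooth and informative.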