Lessons: 16
Duration: 5 days
Language: English

OBJECTIVE:

COURSE FEATURES:

PRE-REQUISITES:

LAB SETUP REQUIREMENTS:

Learning Path

  • Introduction to generative artificial intelligence and its applications
  • Types of generative AI models: generative adversarial networks (GANs), variational autoencoders (VAEs), and autoregressive models
  • Understanding the difference between generative and discriminative models
  • Probability distributions and their role in generative modeling
  • Maximum likelihood estimation (MLE) and maximum a posteriori estimation (MAP)
  • Sampling techniques for generating data samples from probability distributions
  • Overview of generative adversarial networks (GANs) architecture
  • Training procedure: generator and discriminator networks
  • Applications of GANs in image generation, style transfer, and data augmentation
  • Participants implement a basic GAN model using TensorFlow or PyTorch
  • Training the GAN on a simple dataset and generating new synthetic samples
  • Experimenting with hyperparameters and architecture modifications
  • Introduction to variational autoencoders (VAEs) and their architecture
  • Objective function: reconstruction loss and KL divergence
  • Applications of VAEs in image generation, anomaly detection, and dimensionality reduction
  • Overview of autoencoder architecture and training procedure
  • Denoising autoencoders, sparse autoencoders, and convolutional autoencoders
  • Hands-on exercise: implementing and training an autoencoder model
  • Introduction to adversarial autoencoders (AAEs) and their architecture
  • Combining elements of GANs and VAEs for improved generative modeling
  • Applications of AAEs in image generation and unsupervised representation learning
  • Participants implement and train a variational autoencoder (VAE) model using TensorFlow or PyTorch
  • Generating new images and analyzing latent space representations
  • Fine-tuning VAE hyperparameters and architecture for improved performance
  • Overview of text generation techniques and challenges
  • Markov models, recurrent neural networks (RNNs), and transformers for text generation
  • Applications of text generation in language modeling, chatbots, and content creation
  • Introduction to recurrent neural networks (RNNs) architecture
  • Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells
  • Training RNNs for text generation tasks
  • Introduction to transformer architecture and self-attention mechanism
  • Overview of transformer-based models such as GPT (Generative Pre-trained Transformer) and BERT (Bidirectional Encoder Representations from Transformers)
  • Fine-tuning pre-trained transformer models for text generation tasks
  • Participants implement and train RNN and transformer-based models for text generation tasks using TensorFlow or PyTorch
  • Generating new text samples and evaluating model performance
  • Experimenting with different architectures and training strategies for text generation
  • Overview of image generation techniques and challenges
  • Autoregressive models such as PixelCNN, and conditional image generation
  • Applications of image generation in computer vision, art generation, and image synthesis
  • Introduction to conditional generative models such as conditional GANs and conditional VAEs
  • Conditioning on class labels, attributes, or text descriptions for controlled image generation
  • Hands-on exercise: implementing conditional generative models for image generation
  • Overview of style transfer techniques using neural networks
  • Gatys et al.’s neural style transfer algorithm
  • Image-to-image translation with conditional GANs (cGANs) and pix2pix models
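A recurring formula in the modules above is the VAE objective: a reconstruction loss plus the KL divergence between the learned latent posterior and a standard normal prior. A minimal NumPy sketch of that loss is shown below, assuming a diagonal Gaussian posterior and a Bernoulli (binary cross-entropy) decoder; the variable names and shapes are illustrative, not taken from any specific framework:

```python
import numpy as np

def vae_loss(x, x_recon, mu, logvar, eps=1e-7):
    """ELBO-style VAE loss: binary cross-entropy reconstruction term
    plus the closed-form KL(N(mu, sigma^2) || N(0, 1))."""
    x_recon = np.clip(x_recon, eps, 1 - eps)  # avoid log(0)
    recon = -np.sum(x * np.log(x_recon) + (1 - x) * np.log(1 - x_recon))
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

# A posterior that exactly matches the N(0, 1) prior incurs zero KL cost,
# so only the reconstruction term contributes here.
x = np.array([1.0, 0.0])
loss = vae_loss(x, np.array([0.9, 0.1]), mu=np.zeros(2), logvar=np.zeros(2))
```

In a real implementation the `mu` and `logvar` vectors come from the encoder network and the loss is minimized with gradient descent; the closed-form KL term is what keeps the latent space close to the prior, which is what makes sampling new data from the decoder possible.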

  • Participants implement and train conditional generative models for image generation tasks using TensorFlow or PyTorch

  • Experimenting with different conditioning strategies and loss functions
  • Applying style transfer techniques to transform images with different artistic styles
  • Overview of advanced generative models such as flow-based models and energy-based models
  • RealNVP (real-valued non-volume preserving) and Glow (generative flow with invertible 1×1 convolutions) architectures
  • Applications of flow-based models in density estimation and image generation
  • Brainstorming project ideas and use cases for generative AI applications
  • Defining project goals, requirements, and deliverables
  • Forming project teams and assigning roles and responsibilities
  • Participants work on individual or group projects implementing generative AI models for specific tasks or applications
  • Guidance and support provided by instructors for project implementation and troubleshooting
  • Participants present their projects to the class and instructors
  • Projects are evaluated based on creativity, technical complexity, and practical applicability
  • Feedback provided to participants for further improvement and learning
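As a small taste of the text-generation module, the simplest technique listed alongside RNNs and transformers is a Markov model. The sketch below is a character-level Markov text generator in pure Python; the corpus, context order, and function names are placeholders for illustration, not course materials:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=30, rng=None):
    """Sample text one character at a time from the context table."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    out = seed
    for _ in range(length):
        choices = model.get(out[-len(seed):])
        if not choices:
            break  # unseen context: stop generating
        out += rng.choice(choices)
    return out

corpus = "the quick brown fox jumps over the lazy dog. the dog sleeps."
model = build_model(corpus, order=2)
sample = generate(model, "th")
```

The RNN and transformer models covered in the course replace this fixed-size lookup table with learned parameters, which lets them condition on much longer contexts and generalize to sequences never seen during training.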