Course Duration
1 Day

NVIDIA
Authorized Training

Course cost:
£799.00

Course Overview

Transformer architectures underpin modern natural language processing and large language models. We believe organisations that master AI, Cloud, and Data technologies gain a decisive advantage in building scalable, intelligent applications. This hands-on workshop equips learners with the skills to construct, fine-tune, and deploy Transformer-based deep learning models for real-world NLP tasks.

Over eight hours, participants will build a Transformer neural network in PyTorch, develop a named-entity recognition application using BERT, and deploy the solution with ONNX and NVIDIA TensorRT to an NVIDIA Triton Inference Server. By the end of the course, learners will be proficient in applying task-agnostic Transformer-based models to text classification, named-entity recognition, and question answering, and deploying them into production-ready inference environments.

Prerequisites

Participants should have:

  • Experience with Python coding, including working with library functions and parameters
  • A fundamental understanding of a deep learning framework such as PyTorch, TensorFlow, or Keras
  • A basic understanding of neural networks and core deep learning concepts

Suggested preparation materials may include introductory Python tutorials, overviews of deep learning frameworks, PyTorch fundamentals, and high-level deep learning concepts.

Target Audience

This course is designed for:

  • Developers and engineers building NLP or AI-powered applications
  • Data scientists seeking to deepen their understanding of Transformer architectures
  • Technical professionals deploying deep learning models into production environments
  • Organisations looking to strengthen applied capability in AI inference and model deployment

Learning Objectives

By the end of this workshop, learners will be able to:

  • Explain how Transformer architectures function as foundational building blocks of modern large language models for NLP
  • Describe how self-supervision enhances Transformer-based models such as BERT and other LLM variants to deliver superior NLP results
  • Construct a Transformer neural network in PyTorch, including implementation of self-attention mechanisms
  • Leverage pretrained Transformer models to solve NLP tasks such as text classification, named-entity recognition, and question answering
  • Build and fine-tune a named-entity recognition application using BERT
  • Manage inference challenges associated with NLP workloads, including optimisation and deployment constraints
  • Prepare, optimise, and deploy refined models using ONNX, NVIDIA TensorRT, and NVIDIA Triton Inference Server

Building Transformer-Based NLP Applications Course Content

Introduction and course setup

  • Meet the instructor and review course objectives
  • Set up access to the training environment
  • Overview of workshop structure, tools, and assessment approach

Introduction to Transformers
Explore how the Transformer architecture works in detail:

  • Core components of the Transformer model
  • Multi-head self-attention and positional encoding
  • Calculating and interpreting the self-attention matrix
  • Building the Transformer architecture in PyTorch
  • Using a pretrained Transformer model to translate English to German
  • Discussion of task-agnostic modelling approaches
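For orientation before the workshop, the self-attention calculation covered in this module can be sketched in a few lines of NumPy. This is an illustrative single-head version, not the course's own materials; the matrix sizes are arbitrary placeholders:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:          (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    Returns the attended values and the attention matrix.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (seq_len, seq_len) similarity scores
    A = softmax(scores, axis=-1)      # each row sums to 1
    return A @ V, A

# Toy example: 4 tokens, model width 8, head width 4
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape, A.shape)  # (4, 4) (4, 4)
```

In the workshop itself this computation is built in PyTorch, where the projections are trainable layers and multiple heads run in parallel.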

Self-supervision, BERT, and beyond
Learn how self-supervised Transformer-based models are applied to concrete NLP tasks:

  • Understanding self-supervision and masked language modelling
  • How BERT and other LLM variants improve upon the base Transformer architecture
  • Introduction to NVIDIA NeMo for NLP workflows
  • Building a text classification project to classify abstracts
  • Developing a named-entity recognition project to identify disease names in text
  • Improving model accuracy using domain-specific pretrained models
  • Evaluating performance metrics and refining model outputs
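The masked-language-modelling objective in this module can be illustrated with a short sketch of BERT's masking recipe: roughly 15% of input positions are selected as prediction targets, and of those, 80% are replaced with a [MASK] token, 10% with a random token, and 10% left unchanged. The token ids and vocabulary size below are placeholders, not BERT's real vocabulary:

```python
import random

MASK, VOCAB_SIZE = "[MASK]", 30000  # illustrative values only

def mlm_mask(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking: select ~15% of positions as prediction
    targets; replace 80% of those with [MASK], 10% with a random
    token id, and leave 10% unchanged."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # model must predict the original
            r = rng.random()
            if r < 0.8:
                masked[i] = MASK
            elif r < 0.9:
                masked[i] = rng.randrange(VOCAB_SIZE)
            # else: keep the original token unchanged
    return masked, targets

tokens = list(range(100, 150))  # stand-in ids for a 50-token input
masked, targets = mlm_mask(tokens)
```

Because the model only ever sees the corrupted sequence, it must use bidirectional context to recover the targets, which is what makes the objective self-supervised: no manual labels are needed.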

Inference and deployment for NLP
Deploy and optimise NLP applications for live inference:

  • Preparing trained models for deployment
  • Exporting models to ONNX format
  • Optimising models using NVIDIA TensorRT for accelerated inference
  • Deploying models to an NVIDIA Triton Inference Server
  • Testing, validating, and monitoring deployed NLP services
  • Managing inference challenges in production environments
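As a flavour of the deployment step: each model served by NVIDIA Triton Inference Server lives in a model repository alongside a config.pbtxt describing its inputs and outputs. A minimal sketch for a hypothetical ONNX NER model follows; the model name, sequence length, and label count here are assumptions for illustration:

```
name: "bert_ner"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input_ids"
    data_type: TYPE_INT64
    dims: [ 128 ]
  },
  {
    name: "attention_mask"
    data_type: TYPE_INT64
    dims: [ 128 ]
  }
]
output [
  {
    name: "logits"
    data_type: TYPE_FP32
    dims: [ 128, 9 ]
  }
]
```

Triton expects this file at the top of the model's directory, with each model version in a numbered subdirectory (for example, model_repository/bert_ner/config.pbtxt and model_repository/bert_ner/1/model.onnx).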

Final review and assessment

  • Consolidation of key learnings across architecture, training, and deployment
  • Completion of skills-based coding assessments
  • Multiple-choice knowledge checks covering NLP concepts and model design
  • Guidance on setting up a local development environment
  • Exploration of additional resources and next steps

Exams and assessments

Assessment consists of:

  • Skills-based coding assessments that evaluate the ability to build an NLP application, including creating a neural module pipeline and training a model
  • Multiple-choice questions assessing understanding of NLP concepts and Transformer-based architectures

Upon successful completion, participants will receive an NVIDIA DLI certificate recognising subject matter competency and supporting professional career growth.

Hands-on learning

This workshop is highly practical and application-focused:

  • Guided implementation of a Transformer architecture in PyTorch
  • Development of real NLP applications, including text classification and named-entity recognition
  • Model optimisation using ONNX and NVIDIA TensorRT
  • Live deployment to an NVIDIA Triton Inference Server
  • Dedicated access to a fully configured, GPU-accelerated cloud environment for the duration of the workshop

Hardware and delivery information

Participants require a desktop or laptop capable of running the latest version of Chrome or Firefox. Each learner is provided with dedicated access to a GPU-accelerated server in the cloud.

The course is delivered in English and Simplified Chinese.

Duration: 8 hours

Upcoming Dates

Dates and locations are available on request. Please contact us for the latest schedule.

Advance Your Career with Building Transformer-Based NLP Applications

Gain the skills you need to succeed. Enrol in Building Transformer-Based NLP Applications with Newto Training today.