Data Systems Architecture, Data Systems Integration, Exploration Analysis, Machine Learning, Scripting, ML Optimisation
Job Roles
Computational Scientist, Data Engineer, AI Engineer, Chief Data Officer, Data Strategist, Data Architect, ICT&SS Professional
Overview
Gain essential theoretical knowledge and hands-on skills to develop, optimise, and deploy Large Language Models (LLMs) in enterprise settings. Gain a competitive edge in today's AI-driven job market by mastering one of its most coveted skills. Reduce dependence on external vendors such as AWS and GCP, and ensure your LLM deployments are optimised, scalable, and ethical. Fine-tune leading models such as GPT-4 Turbo from OpenAI and Claude from Anthropic for real-world use cases, or explore open-source options such as Meta's Llama 2 and models from Hugging Face.
Key Takeaways
At the end of this programme, you will be able to:
Understand foundational concepts of Large Language Models (LLMs), transformer architecture, and their evolution
Learn and apply strategies to fine-tune pre-trained LLMs with available foundational models for specific enterprise tasks
Enhance LLMs using reward-based reinforcement learning for performance optimisation
Overcome deployment challenges in production environments and optimise LLMs for efficient training and inference with model and data parallelism
Apply theoretical knowledge through practical labs, building and deploying LLMs in real-world scenarios
Who Should Attend
Please refer to the job roles section.
Information technology professionals who are planning to build their enterprise LLMs or fine-tune LLMs with foundational models.
CTOs and technical leaders, data engineers, data scientists, ML engineers, and software developers looking to advance their skills in fine-tuning, deploying, and training Large Language Models (LLMs).
Prerequisites
This is an intensive intermediate course.
Participants should have intermediate mathematics and statistics knowledge, e.g. Boolean algebra (logic) and probability.
Participants should have intermediate computer literacy and software engineering fundamentals, e.g. using Windows, Linux, or macOS; Microsoft Office or LibreOffice; and VMware or VirtualBox, as well as an awareness of web application and client-server software architectures.
Participants should have current or prior hands-on coding experience in one or more high-level programming languages, preferably Java. Experience with Python, R, or Structured Query Language (SQL) would be an added advantage.
Participants without programming experience should self-study basic Java or Python.
Knowledge of deploying applications on cloud platforms such as AWS and GCP is a plus.
What To Bring
No printed copies of the programme materials are issued. You must bring an internet-enabled computing device (laptop, tablet, etc.) with its power charger to access and download the programme materials. If you are bringing a laptop, please see the tech specs below:
Computer and Processor
Minimum: 1.6 GHz or faster, 2-core Intel Core i3 or equivalent, e.g. Apple (Intel) models from 2012 and newer
Recommended: Intel Core i7 or equivalent, e.g. newer Apple (Intel/M1/M2 chip) models
Memory
Minimum: 4 GB RAM
Recommended: 16 GB RAM
Hard Disk
Minimum: 256 GB disk size
Recommended: 1 TB disk size
Display
Minimum: 800 x 600 screen resolution
Recommended: 1280 x 768 screen resolution
Graphics
Minimum: Graphics hardware acceleration requires DirectX 9 or later, with WDDM 2.0 or higher for Windows 10 (or WDDM 1.3 or higher for Windows 10 Fall Creators Update)
Recommended: DirectX 10 graphics card for graphics hardware acceleration
Others
An internet connection (broadband, wired or wireless); speakers and a microphone (built-in, USB plug-in, or wireless Bluetooth); a webcam or HD webcam (built-in or USB plug-in)
This programme will cover the following topics:
Day 1:
Introduction to Large Language Models (LLMs)
LLM Pre-Training and Scaling Laws
Building with a Foundational Model Using LangChain
Day 2:
Fine-tuning LLMs with Instruction
Parameter-Efficient Fine-Tuning (PEFT)
Fine-tuning a Generative AI Model for Dialogue Summarisation – Hands-On Lab (see the illustrative sketch after this outline)
Day 3:
Reinforcement Learning in LLMs with Human Feedback, Reward Hacking and Scaling
Applying Reinforcement Learning to LLMs – Hands-On Lab
Day 4:
Building your LLM and Choice of Architecture
Planning the LLM Model Pre-Training
Gathering, Selecting & Pre-Processing Datasets for LLMs
Tokenisation
Hyper-Parameter Tuning
Day 5:
Evaluation and Fine-tuning of Pre-trained LLMs
Scaling of LLMs
Responsible AI; LLM Reasoning and Planning with Chain of Thought
In-class Project Review
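To give a flavour of the parameter-efficient fine-tuning covered in the Day 2 lab, the sketch below sets up a LoRA adapter using the Hugging Face transformers and peft libraries. The base model (google/flan-t5-base) and the hyper-parameter values are illustrative assumptions, not the programme's prescribed configuration.

```python
# A minimal LoRA/PEFT sketch, assuming the Hugging Face transformers and peft
# libraries; the base model and hyper-parameters are illustrative choices only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "google/flan-t5-base"  # assumed example model for dialogue summarisation
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of updating all model weights.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,               # rank of the adapter matrices
    lora_alpha=32,     # scaling applied to the adapter output
    lora_dropout=0.05,
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the full model
```

In practice, the adapted model is then trained with a standard training loop on the dialogue-summarisation dataset of choice, with only the small adapter weights being updated.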
Full Fee
Full programme fee: S$4,750.00
9% GST on nett programme fee: S$427.50
Total nett programme fee payable, including GST: S$5,177.50
NOTE
Funding is available for this programme. Please visit the Learning Partner’s website to find out about the updated programme fee funding breakdown and eligibility.
Payment for this programme is to be made to NUS-ISS, National University of Singapore.
Step 1
Apply through your organisation's training request system.
Step 2
Your organisation's training request system (or relevant HR staff) confirms your organisation's approval for you to take the programme.
Your organisation will send registration information to the academy.
Organisation HR L&D or equivalent staff can register on behalf of participants through the Learning Partner's registration portal.
The HR L&D staff will need to generate a registration URL and send it to the participant to register for the programme under the Corporate-Sponsored category. The participant must first log in to L3AP using Singpass before clicking on the URL to complete their registration and declaration. Failure to do so will result in registration under the Self-Sponsored category.