Transformers and Large Language Models: A Hands-On Guide to RAG and Agentic AI First Edition

★★★★★ 4.5 · 116 ratings

US$24.00
Price when purchased online
Free shipping Free 30-day returns

Sold and shipped by plasterrepair.com.au



Product details

Management number: 220491335
Release date: 2026/05/03
List price: US$24.00
Model number: 220491335
This book is a hands-on guide to understanding the foundations, architectures, and real-world applications of transformers and large language models in modern AI.

The book begins by laying the foundations of generative AI: architectures, tokenization, encoding, and classical modeling techniques. Initial chapters address the evolution from feed-forward networks and recurrent neural networks to long short-term memory (LSTM), setting the stage for the revolutionary transformer architecture. The core of the book focuses on transformers, introducing the encoder-decoder framework, attention mechanisms, positional encodings, and the internal workings of multi-head attention, normalization, and multi-layer perceptrons. Readers gain insight into advanced techniques such as rotary positional embeddings (RoPE), mixture of experts (MoE), and knowledge distillation, alongside practical training strategies such as self-supervised learning, fine-tuning, and reinforcement learning from human feedback (RLHF). Popular models from OpenAI, DeepSeek, and other vendors are examined to highlight the evolution of the LLM landscape.

Building on these foundations, the text explores methods for model customization, including parameter-efficient fine-tuning (LoRA, adapters), text generation strategies, prompt engineering, and quantization. Retrieval-augmented generation (RAG) is introduced as a critical innovation for grounding LLMs in external knowledge, with detailed evaluation techniques for both retrieval and generation.

Finally, the book ventures into agentic AI, demonstrating protocols such as the Model Context Protocol (MCP) and agent-to-agent (A2A) interactions with practical coding examples.

In conclusion, this book serves as a practical guide, equipping readers with the technical depth and applied strategies needed to design, fine-tune, and deploy cutting-edge transformers and large language models for real-world applications.

What you will learn:
- Understand the foundations of AI, ML pipelines, tokenization, encoding, and early neural architectures.
- Explore transformers in depth: encoder-decoder design, attention mechanisms, and advanced embedding methods.
- Learn modern LLM advancements such as RoPE, MoE, SLMs, fine-tuning strategies, and evaluation techniques.
- Master practical customization through prompt engineering, PEFT methods, quantization, and text generation.

Who this book is for:
Data scientists, ML engineers, AI researchers, and developers exploring transformers and large language models.

ISBN-13: 979-8868827846
Edition: First Edition
Language: English
Publisher: Apress
Item weight: 1.11 pounds
Publication date: July 3, 2026


Customer ratings & reviews

4.5 out of 5
★★★★★
116 ratings | 48 reviews
5 stars: 83% (96)
4 stars: 4% (5)
3 stars: 2% (2)
2 stars: 1% (1)
1 star: 10% (12)