2017 · Research

Transformer Architecture Paper

Google researchers published "Attention Is All You Need," introducing the Transformer architecture that replaced recurrence with self-attention mechanisms. Transformers enabled massively parallel training and captured long-range dependencies in text far more effectively than previous approaches. This paper became the foundation for virtually every major language model that followed.

In June 2017, a team of eight researchers at Google published "Attention Is All You Need," a paper that introduced the Transformer architecture. While the paper's initial focus was on machine translation, the Transformer would go on to become the foundational architecture for virtually every major AI model that followed, from BERT and GPT to DALL-E and beyond. It is arguably the most influential machine learning paper of the decade.

The Problem with Recurrence

Before the Transformer, the dominant approach for processing sequential data (like text) was recurrent neural networks (RNNs) and their variants, particularly Long Short-Term Memory (LSTM) networks. These architectures processed input one token at a time, maintaining a hidden state that theoretically captured information about previous tokens. However, RNNs had two major limitations: they were difficult to parallelize during training (since each step depended on the previous one), and they struggled to capture long-range dependencies despite mechanisms like LSTM gates.

The Self-Attention Mechanism

The Transformer's key innovation was the self-attention mechanism (also called scaled dot-product attention). Instead of processing tokens sequentially, self-attention allows every token in a sequence to directly attend to every other token, computing relevance scores between all pairs. This means a word at the beginning of a long document can directly influence the representation of a word at the end, without information having to pass through every intermediate step.
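The mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: queries, keys, and values are random toy matrices, and the function computes the paper's scaled dot-product attention, where every token's output is a softmax-weighted mixture of all value vectors.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    """
    d_k = Q.shape[-1]
    # Relevance score between every pair of tokens, scaled by sqrt(d_k)
    # so the softmax does not saturate for large d_k.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys turns scores into attention weights (rows sum to 1).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of all value vectors.
    return weights @ V, weights

# Toy example: 4 tokens, 8-dimensional queries/keys/values.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Note that the score matrix has one entry per token pair, which is why the first token can attend directly to the last with no intermediate steps.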

Multi-Head Attention

The paper introduced multi-head attention, which runs several attention mechanisms in parallel, each learning to focus on different types of relationships. One head might capture syntactic relationships, another semantic similarities, and another positional patterns. The outputs are concatenated and projected, giving the model a rich, multi-faceted representation of the input.
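The split-attend-concatenate-project pattern can be sketched as follows. This is a simplified NumPy version with made-up random projection matrices; in the paper each head attends in its own d_model / h-dimensional subspace, and a final projection W_O mixes the concatenated heads.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Multi-head self-attention sketch.

    X: (seq_len, d_model); Wq, Wk, Wv, Wo: (d_model, d_model).
    """
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Reshape so each head gets its own subspace: (num_heads, seq_len, d_head).
    def split(M):
        return M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Qh, Kh, Vh = split(Q), split(K), split(V)
    # Each head runs scaled dot-product attention independently, in parallel.
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores) @ Vh  # (num_heads, seq_len, d_head)
    # Concatenate the heads and project back to d_model.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

rng = np.random.default_rng(1)
seq_len, d_model, num_heads = 5, 16, 4
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads)
print(out.shape)  # (5, 16)
```

Because each head sees only a low-dimensional projection of the input, the heads are free to specialize in the different relationship types described above.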

The Architecture

The full Transformer uses an encoder-decoder structure. The encoder processes the input sequence through multiple layers of self-attention and feed-forward networks. The decoder generates the output sequence, attending both to its own previous outputs and to the encoder's representations. Both encoder and decoder use residual connections, layer normalization, and positional encodings (since the architecture has no inherent notion of sequence order).
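The positional encodings mentioned above can be illustrated with the paper's sinusoidal formulation, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A minimal sketch, assuming an even d_model:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings from the Transformer paper.

    Returns a (seq_len, d_model) matrix that is added to the token
    embeddings so the model can recover sequence order.
    """
    positions = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]            # (1, d_model / 2)
    angles = positions / (10000.0 ** (i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions use cosine
    return pe

pe = sinusoidal_positional_encoding(50, 32)
print(pe.shape)  # (50, 32)
```

Each dimension oscillates at a different wavelength, which lets the model attend by relative position even for positions longer than any sequence seen in training.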

The Team

The paper was authored by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, and Illia Polosukhin. Several of these researchers went on to found significant AI companies: Aidan Gomez co-founded Cohere, Noam Shazeer co-founded Character.AI (and later returned to Google), and Illia Polosukhin co-founded the NEAR Protocol.

Immediate Results

The paper demonstrated state-of-the-art translation quality on English-to-German and English-to-French benchmarks while training significantly faster than previous architectures. The ability to parallelize training across GPUs meant that Transformers could be trained on much larger datasets and with many more parameters than RNN-based models.

The Cascade Effect

The Transformer architecture quickly proved to be far more general than its creators anticipated. BERT (2018) used the encoder for language understanding. GPT (2018) used the decoder for language generation. Vision Transformers adapted the architecture for image recognition. The same basic framework was applied to protein folding, music generation, code completion, and dozens of other domains. The paper's title, "Attention Is All You Need," turned out to be remarkably prescient.

Key Figures

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin

Lasting Impact

The Transformer architecture became the foundation for virtually every major AI model that followed, enabling the scaling revolution that produced GPT, BERT, and their successors. It was the single most important architectural innovation in modern AI.

Related Events

2018 · Model
BERT by Google

Google released BERT (Bidirectional Encoder Representations from Transformers), a pre-trained language model that achieved state-of-the-art results across eleven NLP benchmarks. BERT's bidirectional training approach allowed it to understand context from both directions in a sentence. It was quickly integrated into Google Search, improving understanding of one in ten English queries.

2018 · Model
GPT-1 by OpenAI

OpenAI released GPT-1 (Generative Pre-trained Transformer), demonstrating that unsupervised pre-training on large text corpora followed by supervised fine-tuning could produce strong NLP results. With 117 million parameters, it was modest by later standards but proved the viability of the generative pre-training approach. GPT-1 set the stage for the scaling revolution that followed.

2006 · Research
Geoffrey Hinton's Deep Learning Breakthrough

Geoffrey Hinton and collaborators published influential work on training deep belief networks, reigniting interest in neural networks after years of stagnation. Their techniques for layer-wise pre-training made it feasible to train networks with many layers. This breakthrough is widely credited with launching the modern deep learning revolution.