How the Internet Actually Works - Explained Simply
You type "google.com" into your browser and hit Enter. Half a second later, the page appears. It feels instant, almost magical.
But in that half second, your computer just performed one of the most complex operations in modern technology - involving undersea cables, satellites, server farms, and protocols designed in the 1970s.
Let's break it down in plain English.
Step 1: You Type a URL
When you type google.com into your browser's address bar, you're providing a domain name - a human-readable label for a computer somewhere on the internet.
But computers don't understand "google.com." They understand numbers. Specifically, they understand IP addresses like 142.250.80.46.
So the first thing your browser needs to do is translate that friendly name into a number. This is where DNS comes in.
Step 2: DNS - The Internet's Phone Book
DNS stands for Domain Name System, and it works exactly like a phone book. You know the name, and DNS gives you the number.
Here's how the lookup works:
1. Your browser checks its local cache - "Have I looked this up recently?"
2. If not, it asks your operating system's cache
3. If that's empty too, it asks your router
4. The router asks your ISP's DNS server
5. If the ISP doesn't know, it asks a root DNS server - one of 13 named root servers (each actually a globally distributed cluster) that form the backbone of the internet's naming system
6. The root server points to the top-level domain (TLD) server (.com, .org, .net, etc.)
7. The TLD server points to Google's authoritative DNS server
8. Google's DNS server finally returns the IP address
This entire chain usually takes under 50 milliseconds. Once your browser has the IP address, it's cached so you don't have to do it again for a while.
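The lookup chain above can be sketched as a toy recursive resolver. The server names and the mapping tables here are made up for illustration - a real resolver sends UDP queries to actual name servers - but the walk from cache to root to TLD to authoritative server, and the caching of the final answer, follow the steps above:

```python
# Made-up name-server data, for illustration only.
ROOT_SERVERS = {"com": "a.gtld-servers.net"}       # root knows the TLD servers
TLD_SERVERS = {"google.com": "ns1.google.com"}     # TLD knows the authoritative server
AUTH_SERVERS = {"google.com": "142.250.80.46"}     # authoritative server knows the IP

cache = {}  # browser/OS cache: domain -> IP

def resolve(domain):
    """Return (ip, path), where path records which servers were consulted."""
    if domain in cache:                            # steps 1-4: check caches first
        return cache[domain], ["cache"]
    tld = domain.rsplit(".", 1)[-1]
    path = [
        f"root -> {ROOT_SERVERS[tld]}",            # steps 5-6: root points to TLD server
        f"tld -> {TLD_SERVERS[domain]}",           # step 7: TLD points to authoritative
    ]
    ip = AUTH_SERVERS[domain]                      # step 8: authoritative returns the IP
    cache[domain] = ip                             # cache it for next time
    return ip, path

ip, path = resolve("google.com")
print(ip, path)                      # full chain on the first lookup
print(resolve("google.com")[1])      # ['cache'] -- second lookup skips the chain
```

The second call returning straight from the cache is exactly why repeat visits to a site feel faster.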
Step 3: Making the Connection (TCP Handshake)
Now your browser knows the IP address, but it can't just start shouting data at Google's servers. It needs to establish a formal connection first.
This is done with a TCP three-way handshake:
1. SYN - Your computer sends a "Hey, I want to talk" message
2. SYN-ACK - Google's server responds "Sure, I'm ready"
3. ACK - Your computer confirms "Great, let's go"
This handshake ensures both sides are ready to communicate and establishes the ground rules for the conversation - starting sequence numbers (so packets can be put back in order), maximum segment size, and other connection options.
Step 4: HTTPS and Encryption (TLS)
Before any real data is exchanged, your browser and the server perform another handshake - this time for security.
If the URL starts with https:// (which is almost everything these days), the connection uses TLS (Transport Layer Security) to encrypt everything.
Here's the simplified version:
1. Your browser asks the server for its SSL/TLS certificate
2. The server sends the certificate, which contains a public key
3. Your browser verifies the certificate is legitimate (signed by a trusted authority)
4. Both sides use the public key to agree on a shared secret key
5. From this point, all data is encrypted with that shared key
This is why nobody between you and Google - not your ISP, not the coffee shop WiFi, not a hacker - can read what you're sending or receiving. (They can still see which site you're connecting to, just not the content.)
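The core trick in step 4 - agreeing on a secret without ever transmitting it - can be shown with a toy Diffie-Hellman exchange. Real TLS uses elliptic-curve variants with enormous numbers; the tiny values here are purely for illustration and offer no security:

```python
import secrets

# Public parameters, known to everyone (toy sizes - insecure on purpose).
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # browser's private number, never sent
b = secrets.randbelow(p - 2) + 1   # server's private number, never sent

A = pow(g, a, p)   # browser sends A over the (eavesdroppable) network
B = pow(g, b, p)   # server sends B over the network

# Each side combines its own private number with the other's public value.
browser_secret = pow(B, a, p)      # (g^b)^a mod p
server_secret = pow(A, b, p)       # (g^a)^b mod p

print(browser_secret == server_secret)   # True: both derived the same key
```

An eavesdropper sees `p`, `g`, `A`, and `B`, but recovering the secret from those is the hard part - with realistic key sizes, computationally infeasible.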
Step 5: The HTTP Request
Now that you have a secure connection, your browser sends an HTTP request. It looks something like this:
GET / HTTP/1.1
Host: google.com
Accept: text/html
User-Agent: Chrome/120
This tells Google's server: "Send me the homepage, I'm using Chrome, and I want HTML."
The server processes this request - which might involve:
- Checking if you're logged in
- Personalizing the page for your region
- Loading data from databases
- Running server-side code
Then it sends back an HTTP response with the HTML content of the page.
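Because HTTP/1.1 is a plain-text protocol, both the request and the response are just structured strings. A minimal sketch of composing a request like the one above and parsing a hypothetical response (the response contents here are invented for illustration):

```python
# The request: method, headers, and a blank line marking the end.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: google.com\r\n"
    "Accept: text/html\r\n"
    "User-Agent: Chrome/120\r\n"
    "\r\n"
)

# A made-up response a server might send back.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

def parse_response(raw):
    """Split a raw HTTP/1.1 response into (status code, headers, body)."""
    head, body = raw.split("\r\n\r\n", 1)          # blank line separates head from body
    status_line, *header_lines = head.split("\r\n")
    status = int(status_line.split()[1])           # e.g. "HTTP/1.1 200 OK" -> 200
    headers = dict(line.split(": ", 1) for line in header_lines)
    return status, headers, body

status, headers, body = parse_response(response)
print(status, headers["Content-Type"])   # 200 text/html
```

Your browser does all of this for you (and HTTP/2 replaces the text format with binary frames), but the information exchanged is the same.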
Step 6: Packets and Routing
Here's where it gets wild. The data doesn't travel as one big chunk. It's broken into tiny pieces called packets - each one typically around 1,500 bytes.
Each packet:
- Has a header with the source IP, destination IP, and packet number
- Travels independently through the network
- May take completely different routes to reach you
- Is reassembled in order when all packets arrive
A single webpage might require hundreds or thousands of packets. They bounce between routers across cities, countries, and sometimes oceans - through fiber optic cables, undersea cables, and even satellite links.
The routing is handled by routers that read each packet's destination and forward it to the next hop. It's like a postal system, but operating at the speed of light.
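A sketch of that split-and-reassemble cycle: chop a payload into ~1,500-byte packets, let them "arrive" in a random order, and put them back together by sequence number. The IP addresses are placeholders:

```python
import random

PACKET_SIZE = 1500  # typical maximum payload per packet

def packetize(data, src, dst):
    """Split data into packets, each carrying addressing info and a sequence number."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": data[i:i + PACKET_SIZE]}
        for i in range(0, len(data), PACKET_SIZE)
    ]

page = b"<html>" + b"x" * 10_000 + b"</html>"
packets = packetize(page, "142.250.80.46", "203.0.113.7")

random.shuffle(packets)   # independent routing means arrival order isn't guaranteed

# The receiver sorts by sequence number and joins the payloads back together.
reassembled = b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
print(len(packets), "packets")          # 7 packets for ~10 KB of HTML
print(reassembled == page)              # True
```

This is why losing a single packet doesn't lose the page - the receiver just asks for that one piece again.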
Step 7: Your Browser Renders the Page
Once all the packets arrive and are reassembled, your browser has the raw HTML. But it's far from done.
Parsing HTML
The browser reads the HTML and builds a DOM (Document Object Model) - a tree structure representing every element on the page.
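A minimal sketch of DOM construction, using Python's built-in `html.parser`: the parser emits start-tag, end-tag, and text events, and a stack turns those events into a nested tree. Real browser engines do far more (error recovery, scripting hooks), but the tree-building idea is the same:

```python
from html.parser import HTMLParser

class DOMBuilder(HTMLParser):
    """Build a simple tree of {"tag": ..., "children": [...]} nodes."""

    def __init__(self):
        super().__init__()
        self.root = {"tag": "document", "children": []}
        self.stack = [self.root]   # the node currently being filled is on top

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "children": []}
        self.stack[-1]["children"].append(node)   # attach to current parent
        self.stack.append(node)                   # descend into the new node

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()                      # climb back to the parent

    def handle_data(self, data):
        if data.strip():
            self.stack[-1]["children"].append(data.strip())   # text node

builder = DOMBuilder()
builder.feed("<html><body><h1>Hello</h1><p>world</p></body></html>")

html_node = builder.root["children"][0]
body_node = html_node["children"][0]
print(html_node["tag"])                                # html
print([c["tag"] for c in body_node["children"]])       # ['h1', 'p']
```

The nesting of tags in the source becomes parent-child relationships in the tree - which is exactly what CSS selectors and JavaScript later traverse.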
Loading Resources
As it parses, it discovers the page needs more stuff:
- CSS files for styling
- JavaScript files for interactivity
- Images, fonts, and videos
Each of these triggers additional HTTP requests, and each one goes through the same packet-routing process.
Rendering
Once it has everything, the browser:
1. Applies CSS styles to the DOM elements
2. Calculates the layout - where everything goes on screen
3. Paints pixels onto the screen
4. Executes JavaScript to make things interactive
All of this happens in parallel, with the browser prioritizing visible content so you see something useful as quickly as possible. This is why pages often load progressively - text first, then images, then interactive elements.
The Physical Infrastructure
Let's zoom out and talk about the physical hardware that makes all of this possible.
Undersea Cables
Over 95% of international internet traffic travels through submarine fiber optic cables laid across ocean floors. There are currently over 500 of these cables, some stretching thousands of miles.
They're about the width of a garden hose, armored in steel, and carry data as pulses of light through glass fibers thinner than a human hair. A single cable can carry hundreds of terabits per second.
Data Centers
Major tech companies operate massive data centers around the world. Google alone has over 30 data centers on four continents. These buildings are filled with thousands of servers, consume enormous amounts of electricity, and require sophisticated cooling systems.
When you visit a website, you're usually connecting to a server in the data center closest to you. This is why loading times vary by location.
Content Delivery Networks (CDNs)
To make things faster, companies use CDNs - networks of servers distributed around the world that cache copies of content. When you load a news article, the images might come from a CDN server just a few miles away, even though the article itself came from a server on another continent.
Internet Exchange Points (IXPs)
ISPs and networks connect to each other at Internet Exchange Points - physical locations where different networks meet and exchange traffic. Major cities have large IXPs that handle enormous amounts of data daily.
Fun Facts About the Internet
- The first message ever sent on the internet's precursor, ARPANET (1969), was "LO" - the system crashed before they could finish typing "LOGIN"
- About 5.5 billion people use the internet as of 2026
- The total amount of data on the internet is estimated at over 120 zettabytes
- A round trip through fiber between New York and London takes roughly 70 milliseconds
- By some estimates, the internet and the devices connected to it account for up to 10% of the world's electricity use
Why This Matters
Understanding how the internet works isn't just trivia. It helps you:
- Troubleshoot problems - Is it DNS? Is the server down? Is it your ISP?
- Stay secure - Knowing about HTTPS helps you spot insecure connections
- Make better choices - Understanding CDNs and caching helps if you're building websites
- Appreciate the engineering - The fact that all of this happens in under a second, billions of times per day, is genuinely remarkable
Next time a page loads slowly, you'll have a much better idea of which part of this incredible chain might be the bottleneck.