40 Linux Commands Every Developer Should Know -- With Real Examples
Whether you are deploying to a server, working inside Docker containers, managing a VPS, or using macOS (which shares most of these commands), knowing your way around the terminal is not optional. It is a core skill.
This is not a reference manual. These are the commands you will actually reach for as a developer, organized by what you are trying to do, with real examples you can run right now.
File and Directory Basics
ls -- List Directory Contents
> ls # list files in current directory
> ls -la # list ALL files (including hidden) with details
> ls -lh # human-readable file sizes (KB, MB, GB)
> ls -lt # sort by modification time, newest first
> ls -lS # sort by file size, largest first
> ls *.js # list only JavaScript files
The -la combination is probably the command you will type most often. It shows permissions, owner, size, and modification date for every file, including hidden ones (those starting with a dot).
cd -- Change Directory
> cd /var/log # go to absolute path
> cd .. # go up one level
> cd ~ # go to home directory
> cd - # go back to previous directory
> cd ../.. # go up two levels
cd - is incredibly useful. It acts like an undo button for navigation -- run it repeatedly to toggle between two directories.
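A quick way to see the toggle in action (using /tmp and /var as throwaway destinations):

```shell
cd /tmp            # first directory
cd /var            # second directory
cd - > /dev/null   # jump back; cd - also prints the directory name, so silence it
pwd                # back in /tmp
```

Running `cd -` again from here would return you to /var.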
mkdir and rmdir -- Create and Remove Directories
> mkdir my-project # create a directory
> mkdir -p src/components/ui # create nested directories (all at once)
> rmdir empty-directory # remove an empty directory
The -p flag is essential. Without it, you cannot create nested paths in a single command, and -p also suppresses the error if the directory already exists.
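You can verify the difference in a throwaway directory (created here with mktemp so nothing real is touched):

```shell
cd "$(mktemp -d)"                  # scratch directory for the demo

# Fails: "parent" does not exist yet, so plain mkdir refuses
mkdir parent/child 2>/dev/null || echo "plain mkdir failed"

# Succeeds: -p creates parent and child in one shot
mkdir -p parent/child && echo "mkdir -p created parent/child"
```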
cp -- Copy Files and Directories
> cp file.txt backup.txt # copy a file
> cp file.txt /some/other/path/ # copy to a different directory
> cp -r src/ src-backup/ # copy a directory recursively
> cp -i file.txt backup.txt # ask before overwriting
mv -- Move or Rename
> mv old-name.txt new-name.txt # rename a file
> mv file.txt ../ # move file up one directory
> mv *.log /var/log/archive/ # move all log files
rm -- Remove Files
> rm file.txt # remove a file
> rm -r directory/ # remove a directory and everything inside
> rm -i file.txt # ask for confirmation before deleting
> rm *.tmp # remove all .tmp files
Be careful with rm -r. There is no trash can. Deleted files are gone. Consider using rm -ri for important operations so it asks before each deletion.
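One defensive pattern worth knowing is a "soft delete": move things into a holding directory instead of removing them outright, so mistakes are recoverable. A minimal sketch (all paths here are throwaway mktemp directories, not a real trash implementation):

```shell
trash="$(mktemp -d)"          # stand-in for a real trash location
workdir="$(mktemp -d)"
cd "$workdir"

mkdir -p project
echo "important" > project/data.txt

# Instead of: rm -r project/
mv project "$trash/"          # recoverable later from $trash

[ ! -d project ] && echo "project moved out of the way"
```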
touch -- Create Empty Files
> touch index.html # create a new empty file
> touch file1.txt file2.txt file3.txt # create multiple files at once
If the file already exists, touch updates its modification timestamp without changing the contents.
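A quick demonstration that touch leaves existing contents alone (using a scratch file in a temp directory):

```shell
cd "$(mktemp -d)"
echo "content" > notes.txt
touch notes.txt        # existing file: timestamp updated, content untouched
cat notes.txt          # prints: content
```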
Viewing and Searching File Contents
cat -- Display File Contents
> cat README.md # display entire file
> cat -n server.log # display with line numbers
> cat file1.txt file2.txt > merged.txt # combine two files into one
For long files, cat dumps everything to the screen at once. Use less instead.
less -- View Files with Scrolling
> less large-log-file.log
Inside less:
- Space or f -- scroll forward one page
- b -- scroll backward one page
- /pattern -- search forward for "pattern"
- n -- jump to next search match
- q -- quit
Less is better than cat for any file longer than your terminal screen.
head and tail -- View Start or End of Files
> head -20 server.log # first 20 lines
> tail -50 server.log # last 50 lines
> tail -f server.log # follow the file in real time (live updates)
tail -f is essential for watching log files during development. It streams new lines as they are appended to the file. Press Ctrl+C to stop.
grep -- Search Inside Files
> grep "error" server.log # find lines containing "error"
> grep -i "error" server.log # case-insensitive search
> grep -r "TODO" src/ # search recursively in a directory
> grep -n "function" app.js # show line numbers
> grep -c "error" server.log # count matching lines
> grep -v "debug" server.log # show lines NOT containing "debug"
> grep -l "import React" src/**/*.tsx # list files that match
grep -r is how you search an entire codebase from the command line. Combine with -n to get line numbers and -i for case-insensitive matching.
find -- Locate Files
> find . -name "*.js" # find all JS files
> find . -name "*.log" -mtime -7 # log files modified in last 7 days
> find . -type d -name "node_modules" # find all node_modules directories
> find . -size +100M # files larger than 100MB
> find . -name "*.tmp" -delete # find and delete all .tmp files
> find . -empty -type f # find all empty files
wc -- Count Lines, Words, Characters
> wc -l server.log # count lines
> wc -w README.md # count words
> wc -l src/**/*.ts # count lines in all TypeScript files
Text Processing
sed -- Stream Editor (Find and Replace)
> sed 's/old/new/' file.txt # replace first occurrence per line
> sed 's/old/new/g' file.txt # replace ALL occurrences
> sed -i 's/old/new/g' file.txt # edit the file in place
> sed -n '10,20p' file.txt # print only lines 10-20
> sed '/^#/d' config.txt # delete all comment lines
sed -i modifies the file directly. On macOS, you need sed -i '' 's/old/new/g' file.txt (empty string after -i).
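A small round trip shows in-place editing at work. This assumes GNU sed (the Linux default); the macOS invocation from the note above applies there:

```shell
cd "$(mktemp -d)"
printf 'old line\nold again\n' > file.txt

# GNU sed in-place edit; on macOS: sed -i '' 's/old/new/g' file.txt
sed -i 's/old/new/g' file.txt

cat file.txt     # prints: new line, then new again
```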
awk -- Pattern Scanning and Processing
> awk '{print $1}' file.txt # print first column
> awk '{print $1, $3}' file.txt # print first and third columns
> awk -F',' '{print $2}' data.csv # use comma as delimiter
> awk '{sum += $1} END {print sum}' nums # sum all numbers in first column
> awk 'NR==5,NR==10' file.txt # print lines 5 through 10
Awk treats each line as a series of fields separated by whitespace (or a custom delimiter with -F). $1 is the first field, $2 is the second, and so on. $0 is the entire line.
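Here is the field model on a made-up access log (the file name and columns are invented for the demo): method, path, and response time, with awk summing the third field:

```shell
cd "$(mktemp -d)"
# Hypothetical access log: method, path, response time in ms
cat > access.log <<'EOF'
GET /home 12
GET /api 40
POST /api 85
EOF

# $3 is the third whitespace-separated field on each line
awk '{sum += $3} END {print "total ms:", sum}' access.log   # prints: total ms: 137
```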
sort and uniq -- Sort and Deduplicate
> sort file.txt # sort lines alphabetically
> sort -n numbers.txt # sort numerically
> sort -r file.txt # sort in reverse
> sort file.txt | uniq # remove duplicate lines
> sort file.txt | uniq -c # count occurrences of each line
> sort file.txt | uniq -c | sort -rn # most frequent lines first
The combination sort | uniq -c | sort -rn is incredibly useful for analyzing log files. It shows you the most common entries.
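A worked example with a small invented log of HTTP status codes makes the pattern concrete:

```shell
cd "$(mktemp -d)"
# Hypothetical log of status codes, one per line
cat > status.log <<'EOF'
200
404
200
500
200
404
EOF

# Count each distinct line, most frequent first:
# 200 appears 3 times, 404 twice, 500 once
sort status.log | uniq -c | sort -rn
```

Note that uniq only collapses *adjacent* duplicates, which is why the input must be sorted first.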
cut -- Extract Columns
> cut -d',' -f2 data.csv # extract second column from CSV
> cut -d':' -f1 /etc/passwd # extract usernames
> cut -c1-10 file.txt # extract first 10 characters per line
Pipes and Redirection
Pipes connect the output of one command to the input of another. This is where Linux commands become truly powerful.
The Pipe (|)
> cat server.log | grep "error" | wc -l
> # Translation: read the log, find lines with "error", count them
> ps aux | grep "node" | grep -v "grep"
> # Find running Node.js processes
> history | grep "git" | tail -20
> # Last 20 git commands you ran
Redirection
> echo "hello" > file.txt # write to file (overwrites)
> echo "world" >> file.txt # append to file
> command 2> errors.log # redirect errors to a file
> command > output.log 2>&1 # redirect both stdout and stderr
> command > /dev/null 2>&1 # discard all output (silent mode)
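A short demonstration in a scratch directory shows truncate vs. append, and that stderr really is a separate stream:

```shell
cd "$(mktemp -d)"

echo "hello" > out.log    # '>' truncates the file, then writes
echo "world" >> out.log   # '>>' appends
cat out.log               # prints: hello, then world

# stderr gets its own file: this ls fails, and the error lands in err.log
ls no-such-file > /dev/null 2> err.log
```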
Permissions and Ownership
chmod -- Change File Permissions
> chmod +x script.sh # make a file executable
> chmod 755 script.sh # owner: read/write/execute, group/others: read/execute
> chmod 644 config.json # owner: read/write, others: read only
> chmod -R 755 directory/ # apply recursively to all files in a directory
The numeric system: 4 = read, 2 = write, 1 = execute. Add them up for each user class (owner, group, others). So 755 means owner gets 7 (4+2+1 = read+write+execute), group and others get 5 (4+1 = read+execute).
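You can confirm the arithmetic with stat, which prints the permission bits in octal. This assumes GNU stat (Linux); BSD/macOS stat uses different flags:

```shell
cd "$(mktemp -d)"
touch script.sh

chmod 755 script.sh
stat -c '%a' script.sh    # prints: 755  (rwxr-xr-x)

chmod 644 script.sh
stat -c '%a' script.sh    # prints: 644  (rw-r--r--)
```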
chown -- Change Ownership
> chown user:group file.txt # change owner and group
> chown -R user:group directory/ # change recursively
> chown www-data:www-data /var/www # common for web servers
Networking
curl -- Make HTTP Requests
> curl https://api.example.com/users # GET request
> curl -X POST -d '{"name":"Alice"}' -H "Content-Type: application/json" https://api.example.com/users # POST with JSON
> curl -I https://example.com # headers only
> curl -o file.zip https://example.com/file.zip # download a file
> curl -s https://api.example.com/health | jq . # silent mode, pipe to jq for pretty JSON
wget -- Download Files
> wget https://example.com/file.zip # download a file
> wget -r -l 1 https://example.com # download a page and its linked resources
netstat and ss -- Network Connections
> ss -tlnp # show all listening TCP ports and which process owns them
> ss -tlnp | grep 3000 # check what is running on port 3000
ss is the modern replacement for netstat; on older systems without ss, netstat -tlnp gives similar output.
ping and dig -- Network Diagnostics
> ping google.com # check if a host is reachable
> dig example.com # DNS lookup
> dig +short example.com # just the IP address
Process Management
ps -- List Processes
> ps aux # list all running processes
> ps aux | grep node # find Node.js processes
> ps aux --sort=-%mem | head -10 # top 10 memory-consuming processes
kill -- Stop Processes
> kill 12345 # gracefully stop process with PID 12345
> kill -9 12345 # force kill (use as last resort)
> killall node # kill all processes named "node"
top and htop -- System Monitor
> top # real-time process viewer
> htop # better version (install with apt/brew)
Inside top/htop:
- P -- sort by CPU usage
- M -- sort by memory usage
- k -- kill a process
- q -- quit
Disk and System Info
df and du -- Disk Usage
> df -h # show disk space for all partitions
> du -sh * # size of each item in current directory
> du -sh node_modules/ # size of node_modules
> du -sh * | sort -rh | head -10 # top 10 largest items
free -- Memory Usage
> free -h # show RAM usage in human-readable format
uname -- System Info
> uname -a # all system info
> uname -r # kernel version
Compression
tar -- Archive Files
> tar -czf archive.tar.gz directory/ # create compressed archive
> tar -xzf archive.tar.gz # extract compressed archive
> tar -tzf archive.tar.gz # list contents without extracting
The flags: c = create, x = extract, z = gzip compression, f = filename, t = list.
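A full round trip ties the flags together (everything happens in a scratch directory; the file contents are invented):

```shell
cd "$(mktemp -d)"
mkdir -p project
echo "data" > project/file.txt

tar -czf project.tar.gz project/       # c = create, z = gzip, f = filename
tar -tzf project.tar.gz                # t = list contents without extracting

mkdir extracted
tar -xzf project.tar.gz -C extracted   # x = extract; -C picks the destination
cat extracted/project/file.txt         # prints: data
```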
zip and unzip
> zip -r project.zip project/ # create a zip file
> unzip project.zip # extract a zip file
> unzip -l project.zip # list contents without extracting
SSH and Remote Servers
ssh -- Connect to Remote Servers
> ssh user@server.com # connect to a server
> ssh -p 2222 user@server.com # connect on a custom port
> ssh -i ~/.ssh/my-key user@server.com # connect with a specific key
scp -- Copy Files Over SSH
> scp file.txt user@server:/path/ # upload a file
> scp user@server:/path/file.txt ./ # download a file
> scp -r directory/ user@server:/path/ # upload a directory
Putting It All Together
The real power of Linux commands comes from combining them. Here are patterns you will use constantly:
Find the largest files in a project
> find . -type f -exec du -sh {} + | sort -rh | head -20
Count lines of code by file type
> find . -name "*.ts" -not -path "*/node_modules/*" | xargs wc -l | sort -n | tail -20
Watch a log file and filter for errors
> tail -f /var/log/app.log | grep --line-buffered "ERROR"
Find and replace across multiple files
> find . -name "*.ts" -exec sed -i 's/oldFunction/newFunction/g' {} +
Check which ports are in use
> ss -tlnp | awk 'NR>1 {print $4}' | sort -u
Clean up old files
> find /tmp -type f -mtime +30 -delete
Installing Commands You Are Missing
Some commands (like htop, jq, tree) might not be installed by default:
macOS (Homebrew):
> brew install htop jq tree wget
Ubuntu/Debian:
> sudo apt install htop jq tree wget
CentOS/RHEL:
> sudo yum install htop jq tree wget
The 10 Commands You Will Use Every Day
If you remember nothing else:
1. ls -la -- see what is in a directory
2. cd / cd - -- navigate and jump back
3. grep -rn -- search for text in files
4. tail -f -- watch logs in real time
5. ps aux | grep -- find running processes
6. chmod +x -- make scripts executable
7. curl -- test APIs from the terminal
8. find . -name -- locate files
9. du -sh * -- check what is using disk space
10. sort | uniq -c | sort -rn -- analyze frequency in logs
The terminal is not about memorizing hundreds of commands. It is about knowing 30-40 well and combining them creatively with pipes and redirection. That skill compounds every single day.
For more developer guides and free tools, check out our blog and explore our developer tools.