Auditing your Catalyst 1300 fleet one switch at a time? If your IP list is stacking up and those sequential SSH sessions are turning a quick job into an all-afternoon affair, it’s time to parallelize. Building on our C1300-tailored audit script (full configs, interface status, versions, and clocks via datadump), this post unleashes multi-threading with Python’s `concurrent.futures`. Fire off 5 (or 10) concurrent sessions, and watch your 30-switch run drop from 10 minutes to under 2, without breaking a sweat on your AAA server.
Perfect for industrial edges where downtime’s not an option: grab baselines pre-upgrade or spot port flaps across the board. Tune the thread count on the fly, and keep those timestamped bundles (e.g., `192_168_1_10_20251016.txt`) flowing into D:\python. Let’s thread the needle on efficiency.
Why Multi-Threading for Network Audits?
SSH to Cisco gear is I/O-heavy (network waits, not CPU crunches), so threads parallelize beautifully, dodging Python’s GIL. Sequential backups crawl; parallel ones sprint. Set `max_workers` low (2-3) for fragile setups, crank it for robust ones. Pro: faster insights into C1300 quirks like unpaginated outputs. Con: overdo it, and your TACACS might throttle; monitor and tune.
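If you want to see the I/O-bound win for yourself before touching real switches, here’s a minimal sketch where `time.sleep` stands in for the SSH round-trips (no devices or paramiko involved):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_audit(ip):
    """Stand-in for an SSH session: the thread just waits on 'I/O'."""
    time.sleep(0.2)  # simulates network wait; the GIL is released during the sleep
    return f"{ip}: done"

ips = [f"192.168.1.{i}" for i in range(1, 9)]  # 8 fake switches

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(fake_audit, ips))  # 4 at a time, in order
elapsed = time.perf_counter() - start

# Sequentially, 8 tasks x 0.2s would take ~1.6s; 4 workers finish in ~0.4s
print(f"{len(results)} audits in {elapsed:.2f}s")
```

Swap the sleep for a real paramiko session and the math holds: workers spend most of their time waiting on the network, so the GIL never becomes the bottleneck.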
Prerequisites
- Python 3.x: with `pip install paramiko` (no extras needed for threading; `concurrent.futures` is stdlib).
- SSH Privs: direct privileged access to the C1300s (no enable; test a full `show run` on login).
- IP File: `access_switch.txt`, one IP per line.
- Output Dir: `D:\python`, auto-created for audit files.

Threading Caution: start conservative, 2 threads for testing, and scale based on your auth backend’s tolerance.
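For reference, `access_switch.txt` is as plain as it sounds: one management IP per line, blank lines ignored (the script strips them on load). A sample with made-up addresses:

```text
192.168.1.5
192.168.1.6
192.168.1.10
10.0.0.3
```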
The Script: Threaded for Speed
Save as `cisco_1300_audit_multithread.py`. The core audit logic stays C1300-optimized (datadump for full pulls); threading wraps the loop.
```python
import paramiko
import sys
import time
import os
from datetime import datetime  # For timestamped filenames
from concurrent.futures import ThreadPoolExecutor, as_completed  # For multi-threading

# HARDCODED CREDENTIALS - REPLACE WITH YOURS (SECURE IN PROD)
username = 'your_username'  # e.g., 'admin'
password = 'your_password'  # e.g., 'cisco123'

# Path to TXT file with switch IPs (one per line)
ips_file_path = r'C:\path\to\your\access_switch.txt'  # Adjust for your OS

# Local backup directory - creates if missing
backup_dir = r'D:\python'
os.makedirs(backup_dir, exist_ok=True)

# THREADING CONFIG - ADJUST HERE FOR PARALLELISM
max_workers = 5  # Concurrent SSH sessions; tune to your network/AAA capacity (e.g., 2 for small, 10 for large)


def send_and_capture(shell, command, timeout=5):
    """
    Helper: Send a command via shell, capture full output.
    """
    shell.send(command + '\n')
    time.sleep(timeout)  # Initial buffer
    output = ''
    max_wait = 30  # Safeguard: give up after 30 idle polls (0.1s each, ~3s of silence)
    wait_count = 0
    while wait_count < max_wait:
        if shell.recv_ready():
            output += shell.recv(4096).decode('utf-8', errors='ignore')
            wait_count = 0
        else:
            time.sleep(0.1)
            wait_count += 1
    return output


def backup_config(ip, username, password):
    """
    SSH to a Cisco C1300 switch, enable datadump, grab multiple show outputs
    (no enable needed), and save them bundled to a local file named IP+date.
    Returns the filename on success or None on failure.
    """
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    timestamp = datetime.now().strftime('%Y%m%d')  # e.g., 20251016
    filename = f"{ip.replace('.', '_')}_{timestamp}.txt"  # e.g., 192_168_1_10_20251016.txt
    filepath = os.path.join(backup_dir, filename)
    try:
        # Connect
        client.connect(
            hostname=ip,
            username=username,
            password=password,
            timeout=10,
            look_for_keys=False,
            allow_agent=False
        )
        # Interactive shell
        shell = client.invoke_shell()
        time.sleep(2)
        # Disable pagination with the C1300-specific command
        shell.send('terminal datadump\n')
        time.sleep(1)
        while not shell.recv_ready():
            time.sleep(0.1)
        shell.recv(4096).decode('utf-8', errors='ignore')  # Drain echo
        # Capture outputs
        run_config = send_and_capture(shell, 'show running-config', timeout=10)  # Longer for config
        int_status = send_and_capture(shell, 'show interfaces status')
        version = send_and_capture(shell, 'show version')
        clock = send_and_capture(shell, 'show clock')
        # Exit
        shell.send('exit\n')
        time.sleep(1)
        client.close()
        # Bundle with separators
        audit_text = f"""=== RUNNING CONFIG ===
{run_config}
=== INTERFACES STATUS ===
{int_status}
=== VERSION ===
{version}
=== CLOCK ===
{clock}
"""
        # Trim to clean sections where possible
        audit_text = audit_text.replace('\x1b[0m', '')  # Strip ANSI codes if any
        with open(filepath, 'w') as f:
            f.write(audit_text)
        line_count = len(audit_text.splitlines())
        print(f"Audit saved: {filepath} ({line_count} lines)")
        return filepath
    except paramiko.AuthenticationException:
        print(f"Auth failed on {ip}")
        return None
    except paramiko.SSHException as e:
        print(f"SSH error on {ip}: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error on {ip}: {e}")
        return None
    finally:
        try:
            client.close()
        except Exception:
            pass


if __name__ == "__main__":
    # Validate file
    if not os.path.exists(ips_file_path):
        print(f"ERROR: IP file not found: {ips_file_path}")
        sys.exit(1)
    # Load IPs
    with open(ips_file_path, 'r') as f:
        switch_ips = [line.strip() for line in f if line.strip()]
    if not switch_ips:
        print("ERROR: No IPs in file.")
        sys.exit(1)
    # Cred check
    if username == 'your_username' or password == 'your_password':
        print("ERROR: Update the credentials at the top of the script.")
        sys.exit(1)
    print(f"Loaded {len(switch_ips)} switches from {ips_file_path}")
    print(f"Audits will save to {backup_dir} (using {max_workers} threads)")
    print("\nStarting threaded device audits...\n")
    # Multi-threaded execution
    success_count = 0
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Submit all tasks
        future_to_ip = {executor.submit(backup_config, ip, username, password): ip for ip in switch_ips}
        # Collect results as they complete
        for future in as_completed(future_to_ip):
            ip = future_to_ip[future]
            try:
                result = future.result()
                if result:
                    success_count += 1
            except Exception as exc:
                print(f"Thread generated exception for {ip}: {exc}")
    print(f"\nAudit complete: {success_count}/{len(switch_ips)} successful.")
    print("Check D:\\python for bundled files.")
```
Threading Breakdown
- Imports: `ThreadPoolExecutor` and `as_completed` handle the parallelism; stdlib, no installs.
- Tune Knob: `max_workers = 5` near the top of the script is your concurrency dial. Edit here: 2 for light loads, 10 for heavy hitters.
- Audit Core: `backup_config` and `send_and_capture` are unchanged; C1300 datadump ensures full, unpaginated grabs.
- Parallel Loop: submits all IPs as futures to the executor pool; `as_completed` processes finishes as they land (interleaved output keeps progress feeling alive); tracks successes and catches thread flubs without halting.
- Cleanup: the pool auto-closes; connections are tidied in `finally`.
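One variant worth knowing: if you’d rather get results back in the same order as your IP file instead of completion order, `executor.map` swaps in for the submit/`as_completed` pair. A minimal sketch with a stub in place of the real `backup_config` (the stub’s pass/fail rule is invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def backup_config(ip, username, password):
    # Stub standing in for the real SSH audit; returns a filename or None
    return None if ip.endswith(".6") else f"{ip.replace('.', '_')}_20251016.txt"

switch_ips = ["192.168.1.5", "192.168.1.6", "192.168.1.10"]

with ThreadPoolExecutor(max_workers=5) as executor:
    # partial() pins the credential arguments; map() yields results in input order
    task = partial(backup_config, username="admin", password="secret")
    results = list(executor.map(task, switch_ips))

success_count = sum(1 for r in results if r)
print(f"Audit complete: {success_count}/{len(switch_ips)} successful.")
```

Trade-off: `map` re-raises a worker’s exception as soon as you iterate past it, so the per-future `try/except` in the main script is the more fault-tolerant pattern for flaky fleets.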
Launch Sequence
- Set `max_workers`, creds, and `ips_file_path`.
- Beef up `access_switch.txt` (e.g., 20 IPs) to feel the speed.
- Execute: `python cisco_1300_audit_multithread.py`.
- Sample Console (zips along, prints mix):
```text
Loaded 20 switches from C:\path\to\your\access_switch.txt
Audits will save to D:\python (using 5 threads)

Starting threaded device audits...

Audit saved: D:\python\192_168_1_5_20251016.txt (756 lines)
Auth failed on 192.168.1.6
Audit saved: D:\python\192_168_1_10_20251016.txt (823 lines)
SSH error on 10.0.0.3: Connection refused
... [blitz of saves] ...

Audit complete: 17/20 successful.
Check D:\python for bundled files.
```
Files: bundled sections (configs, ports, versions, clocks). No truncation, pure C1300 syntax.
Field Hacks
- Thread Tuning: watch AAA logs; if floods hit, drop to 3. For 100+ switches, add a semaphore for finer control.
- Output Order: interleaved? Pipe to a log file: `python cisco_1300_audit_multithread.py > audit.log 2>&1`.
- C1300 Edge Cases: datadump holds; if timeouts creep, bump `max_wait` in `send_and_capture` from 30 to 45 for verbose runs (each count is a 0.1s idle poll).
- Beyond Basics: add `show env all`? Slot it in via `send_and_capture`. Or export to CSV with `pandas` post-run.
- Single-Thread Fallback: set `max_workers = 1` for debugging; it reverts to sequential behavior seamlessly.
- Scale Smart: pair with cron for nightly runs; your industrial nets stay crisp.
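On that semaphore idea: here’s one way it can look, a sketch where a `threading.Semaphore` gates the login phase so the AAA server only ever sees a few simultaneous authentications, even with a big pool (the limit of 3 and the sleep standing in for the SSH handshake are both assumptions; tune to your backend):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Allow 10 worker threads, but only 3 simultaneous "logins" against AAA
auth_gate = threading.Semaphore(3)
peak = 0
active = 0
lock = threading.Lock()

def audit_with_throttle(ip):
    global peak, active
    with auth_gate:               # blocks if 3 logins are already in flight
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)          # stand-in for the SSH/auth handshake
        with lock:
            active -= 1
    return ip

ips = [f"192.168.1.{i}" for i in range(1, 21)]
with ThreadPoolExecutor(max_workers=10) as executor:
    done = list(executor.map(audit_with_throttle, ips))

print(f"{len(done)} audits, never more than {peak} concurrent logins")
```

The pool size stays high for the slow bulk-capture phase; only the auth burst is throttled. In the real script, the gated section would wrap `client.connect(...)` inside `backup_config`.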