Dev Docs Performance Network
WIP - ALL LINKS IN THIS WIKI STRUCTURE ARE CURRENTLY BROKEN DURING WIKI MIGRATION
THESE ARE COMMUNITY DOCS
NB: The Linux suggestions on this page have not been tested. The settings may be out of date for modern Linux distros.
Only the FreeBSD items are in active lab testing (Andy Lemin)
Do NOT apply the items in this guide to production systems without testing them and understanding them fully first
Do NOT copy-paste all of these settings. Test one setting at a time and verify the result. TCP and kernel tuning is HARD! If it were easy, the defaults would already be perfect for everyone and every environment. Tuning is about customising for your particular environment; if you have no special need or problem, the defaults are fine for you.
Always, always, always have backups. These guides are offered by volunteers with no support or guarantee.
These suggestions are not from the Netatalk developers; they are community contributions only. The official documentation should always take priority.
Good luck, and happy tuning!
This guide provides comprehensive DSI tunable optimization for high-speed networks (1-10 Gbps) with detailed parameter analysis, memory considerations, and practical configuration examples.
Netatalk's AFP over DSI implementation contains numerous performance parameters that must be carefully tuned for high-speed networking environments. Default values are conservative and designed for compatibility rather than maximum performance.
The following sections are offered for informational purposes only; every network/server/client setup is different. Change one setting at a time until the desired performance is achieved or the bottleneck is understood.
When testing any settings suggested here, always take backups first.
- Purpose: server quantum controls the maximum DSI data transfer size per operation
- Default: 0x100000 (1 MB)
- Range: 32000 - 0xFFFFFFFFF
- Impact: Primary bottleneck for high-speed transfers
High-Speed Recommendations:
- 1 Gbps: 0x400000 (4 MB)
- 10 Gbps: 0x1000000 (16 MB)
- 40+ Gbps: 0x4000000 (64 MB)
Binary Value Calculation:
# Convert MB to bytes, then to hexadecimal
4 MB = 4 × 1024 × 1024 = 4194304 bytes = 0x400000
16 MB = 16 × 1024 × 1024 = 16777216 bytes = 0x1000000
64 MB = 64 × 1024 × 1024 = 67108864 bytes = 0x4000000
# Verification with calculator or script:
printf "0x%X\n" 4194304 # Output: 0x400000
printf "0x%X\n" 16777216 # Output: 0x1000000
printf "0x%X\n" 67108864 # Output: 0x4000000
- Purpose: dsireadbuf is the read-ahead multiplier that determines the total buffer size per client
- Formula: buffer_size = dsireadbuf × server_quantum
- Default: 12 (12 MB total with 1 MB quantum)
- Range: 6 - 512
High-Speed Recommendations:
- 1 Gbps: 8 (32 MB per client with 4 MB quantum)
- 10 Gbps: 6 (96 MB per client with 16 MB quantum)
- Memory-constrained: 4 (minimum effective buffering)
- Purpose: tcprcvbuf sets the kernel TCP receive buffer size
- Default: 0 (system default, typically 64-128 KB)
- Impact: Critical for high bandwidth × delay networks
Bandwidth-Delay Product Calculations:
BDP = Bandwidth × RTT
1 Gbps × 10ms = 1.25 MB
10 Gbps × 10ms = 12.5 MB
10 Gbps × 100ms = 125 MB (WAN)
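A quick way to sanity-check buffer sizes against your own link is to compute the BDP directly. A minimal sketch (a hypothetical helper, not a Netatalk tool; bandwidth in Gbps, RTT in ms):
#!/bin/sh
# bdp.sh - estimate the bandwidth-delay product (hypothetical helper script)
# Usage: ./bdp.sh <bandwidth_gbps> <rtt_ms>
BW_GBPS=${1:-10}
RTT_MS=${2:-10}
# BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8
BDP_BYTES=$(awk -v bw="$BW_GBPS" -v rtt="$RTT_MS" 'BEGIN { printf "%d", bw * 1e9 * (rtt / 1000) / 8 }')
echo "BDP for ${BW_GBPS} Gbps at ${RTT_MS} ms RTT: ${BDP_BYTES} bytes"
echo "tcprcvbuf should be at least this large to keep a single stream at full rate"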
High-Speed Recommendations:
- 1 Gbps LAN: 2097152 (2 MB)
- 10 Gbps LAN: 16777216 (16 MB)
- 10 Gbps WAN: 134217728 (128 MB)
- Purpose: tcpsndbuf sets the kernel TCP send buffer size
- Default: 0 (system default)
- Impact: Critical for sustained high-speed uploads
High-Speed Recommendations:
- 1 Gbps: 2097152 (2 MB)
- 10 Gbps: 16777216 (16 MB)
- High-latency: Match tcprcvbuf values
- Purpose: splice size sets the buffer size for the splice() system call optimization
- Default: 65536 (64 KB)
- Impact: Reduces user-kernel copies for large transfers
- Range: 4096 - 1048576
High-Speed Recommendations:
- 1 Gbps: 262144 (256 KB)
- 10 Gbps: 1048576 (1 MB)
- Purpose: use sendfile enables sendfile() for zero-copy file reads
- Default: true
- Impact: Eliminates user-space buffer copies
- Critical: Must be enabled for optimal performance
- Purpose: dircachesize sets the number of cached directory/file entries
- Default: 8192
- Impact: Reduces filesystem lookups for metadata operations
High-Speed Recommendations:
- 1 Gbps: 16384 (moderate increase)
- 10 Gbps: 32768 (high metadata workload)
- Memory-rich: 65536 (maximum effectiveness)
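A rough way to choose dircachesize is to measure the working set clients actually browse. A small sketch (the path and depth are examples only):
# Count the directory entries a typical client session touches; a dircachesize near
# or above this number avoids cache churn (a rough estimate, not a formula)
find /srv/afp/volume -maxdepth 3 | wc -l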
Each AFP client connection allocates:
- Command buffer: server_quantum bytes
- Readahead buffer: dsireadbuf × server_quantum bytes
- TCP buffers: tcprcvbuf + tcpsndbuf bytes
- Directory cache: shared across all clients
Total_Memory = Clients × (server_quantum + (dsireadbuf × server_quantum) + tcprcvbuf + tcpsndbuf)
Example: 50 clients on a 1 Gbps network
server quantum = 0x400000 # 4 MB transfers (4194304 bytes)
dsireadbuf = 8
tcprcvbuf = 2097152 # 2 MB
tcpsndbuf = 2097152 # 2 MB
Per client: 4 + (8 × 4) + 2 + 2 = 40 MB
Total: 50 × 40 MB = 2 GB
Example: 20 clients on a 10 Gbps network
server quantum = 0x1000000 # 16 MB transfers (16777216 bytes)
dsireadbuf = 6
tcprcvbuf = 16777216 # 16 MB
tcpsndbuf = 16777216 # 16 MB
Per client: 16 + (6 × 16) + 16 + 16 = 144 MB
Total: 20 × 144 MB = 2.88 GB
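The same arithmetic can be scripted to check a planned configuration before rollout. A minimal sketch (a hypothetical helper; values entered in MB):
#!/bin/sh
# afp_mem_estimate.sh - estimate server memory use from the DSI tunables (hypothetical helper)
# Usage: ./afp_mem_estimate.sh <clients> <quantum_mb> <dsireadbuf> <tcprcvbuf_mb> <tcpsndbuf_mb>
CLIENTS=${1:-20}
QUANTUM_MB=${2:-16}
DSIREADBUF=${3:-6}
RCV_MB=${4:-16}
SND_MB=${5:-16}
PER_CLIENT=$(( QUANTUM_MB + (DSIREADBUF * QUANTUM_MB) + RCV_MB + SND_MB ))
echo "Per client: ${PER_CLIENT} MB"
echo "Total for ${CLIENTS} clients: $(( CLIENTS * PER_CLIENT )) MB"
The configuration profiles below apply this sizing to 1 Gbps, 10 Gbps, high-latency WAN, and memory-balanced deployments.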
[Global]
# DSI Protocol Tuning
server quantum = 0x400000 # 4 MB transfers (4194304 bytes)
dsireadbuf = 8 # 32 MB buffer per client
# TCP Socket Optimization
tcprcvbuf = 2097152 # 2 MB receive buffer
tcpsndbuf = 2097152 # 2 MB send buffer
# Zero-copy optimizations
use sendfile = true
splice size = 262144 # 256 KB splice buffer
# Directory cache tuning
dircachesize = 16384 # 16K cached entries
# Expected performance: ~800-900 Mbps sustained throughput
[Global]
# DSI Protocol Tuning
server quantum = 0x1000000 # 16 MB transfers (16777216 bytes)
dsireadbuf = 6 # 96 MB buffer per client
# TCP Socket Optimization
tcprcvbuf = 16777216 # 16 MB receive buffer
tcpsndbuf = 16777216 # 16 MB send buffer
# Zero-copy optimizations
use sendfile = true
splice size = 1048576 # 1 MB splice buffer
# Directory cache tuning
dircachesize = 32768 # 32K cached entries
# Expected performance: ~7-9 Gbps sustained throughput
[Global]
# DSI Protocol Tuning
server quantum = 0x1000000 # 16 MB transfers (16777216 bytes)
dsireadbuf = 8 # 128 MB buffer per client
# TCP Socket Optimization (High BDP)
tcprcvbuf = 134217728 # 128 MB receive buffer
tcpsndbuf = 134217728 # 128 MB send buffer
# Zero-copy optimizations
use sendfile = true
splice size = 1048576 # 1 MB splice buffer
# Directory cache tuning
dircachesize = 65536 # 64K cached entries
# Expected performance: ~8-10 Gbps with 100ms+ RTT
[Global]
# Balanced DSI Protocol Tuning
server quantum = 0x800000 # 8 MB transfers (8388608 bytes)
dsireadbuf = 4 # 32 MB buffer per client
# Moderate TCP Socket Optimization
tcprcvbuf = 8388608 # 8 MB receive buffer
tcpsndbuf = 8388608 # 8 MB send buffer
# Zero-copy optimizations
use sendfile = true
splice size = 524288 # 512 KB splice buffer
# Conservative directory cache
dircachesize = 16384 # 16K cached entries
# Memory per client: ~56 MB (suitable for 100+ concurrent clients)
# /etc/sysctl.conf or /etc/sysctl.d/99-netatalk-performance.conf
# Network buffer size limits (critical for high-speed networking)
net.core.rmem_max = 268435456 # 256 MB max receive buffer
net.core.wmem_max = 268435456 # 256 MB max send buffer
net.core.rmem_default = 16777216 # 16 MB default receive buffer
net.core.wmem_default = 16777216 # 16 MB default send buffer
# TCP buffer auto-tuning (min default max)
net.ipv4.tcp_rmem = 4096 131072 268435456 # TCP receive buffers
net.ipv4.tcp_wmem = 4096 131072 268435456 # TCP send buffers
net.ipv4.tcp_mem = 786432 1048576 26843546 # TCP memory pressure thresholds
# Network device buffer sizes
net.core.netdev_max_backlog = 30000 # Network device queue length
net.core.netdev_budget = 600 # NAPI budget for packet processing
# TCP congestion control and window scaling
net.ipv4.tcp_congestion_control = bbr # BBR congestion control (Linux 4.9+)
net.ipv4.tcp_window_scaling = 1 # Enable TCP window scaling
net.ipv4.tcp_timestamps = 0 # Disable TCP timestamps (slight perf gain)
net.ipv4.tcp_sack = 1 # Enable selective acknowledgments
# TCP connection tuning
net.ipv4.tcp_fin_timeout = 15 # Reduce FIN timeout
net.ipv4.tcp_keepalive_time = 600 # TCP keepalive timer
net.ipv4.tcp_keepalive_probes = 3 # Number of keepalive probes
net.ipv4.tcp_keepalive_intvl = 15 # Keepalive probe interval
# TCP fast recovery and retransmission
net.ipv4.tcp_frto = 2 # F-RTO (Forward RTO-Recovery)
net.ipv4.tcp_reordering = 3 # Expected packet reordering
net.ipv4.tcp_retries2 = 8 # TCP retransmit attempts
# Connection tracking (reduce for high-speed servers)
net.netfilter.nf_conntrack_max = 524288 # Maximum connection tracking entries
net.netfilter.nf_conntrack_tcp_timeout_established = 7200 # 2 hours
# Virtual memory tuning
vm.swappiness = 1 # Minimize swapping (keep in RAM)
vm.dirty_ratio = 5 # Dirty pages % of memory before sync
vm.dirty_background_ratio = 2 # Background writeback threshold
vm.dirty_expire_centisecs = 1500 # Dirty page expiration (15 seconds)
vm.dirty_writeback_centisecs = 500 # Writeback daemon interval (5 seconds)
# Memory allocation and OOM handling
vm.overcommit_memory = 1 # Allow memory overcommit
vm.overcommit_ratio = 80 # Overcommit ratio percentage
vm.min_free_kbytes = 65536 # Keep 64MB free for network buffers
# Process limits
kernel.pid_max = 4194304 # Maximum process IDs
fs.file-max = 2097152 # System-wide file descriptor limit
fs.nr_open = 2097152 # Per-process file descriptor limit
# Shared memory limits (for CNID database)
kernel.shmmax = 1073741824 # 1 GB maximum shared memory segment
kernel.shmall = 268435456 # Total shared memory pages
kernel.shmmni = 4096 # Maximum shared memory segments
# Block device I/O optimization
vm.block_dump = 0 # Disable block device debugging
vm.laptop_mode = 0 # Disable laptop mode (always performance)
# Filesystem cache tuning
vm.vfs_cache_pressure = 50 # Balance inode/dentry cache pressure
vm.page-cluster = 3 # Readahead clustering (8 pages)
# Transparent Huge Pages (THP) - may help with large buffers
# (these are shell commands, not sysctl entries; run them at boot, e.g. from rc.local)
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/transparent_hugepage/defrag
# Apply immediately
sysctl -p /etc/sysctl.d/99-netatalk-performance.conf
# Verify settings
sysctl net.core.rmem_max
sysctl net.ipv4.tcp_congestion_control
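Beyond checking individual sysctls, it helps to verify what the kernel actually negotiated on live AFP sessions (TCP port 548). A small sketch using ss (filter syntax can vary slightly between iproute2 versions):
# Show buffer and TCP state for established AFP connections on the server
ss -tmi state established '( sport = :548 )'
# Inspect skmem (buffers actually allocated), cwnd, rtt and the congestion
# control algorithm reported for each socket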
# /etc/sysctl.conf
# Network buffer size limits
net.inet.tcp.recvbuf_max=268435456 # 256 MB max TCP receive buffer
net.inet.tcp.sendbuf_max=268435456 # 256 MB max TCP send buffer
net.inet.tcp.recvspace=16777216 # 16 MB default TCP receive buffer
net.inet.tcp.sendspace=16777216 # 16 MB default TCP send buffer
# Network mbuf clusters (critical for high-speed networking)
kern.ipc.nmbclusters=262144 # Number of mbuf clusters (auto set according to memory)
kern.ipc.nmbufs=524288 # Number of mbufs (auto set according to memory)
net.inet.tcp.sendspace_max=16777216 # Maximum TCP send buffer (10Gbe)
net.inet.tcp.recvspace_max=16777216 # Maximum TCP receive buffer (10Gbe)
# TCP window scaling and timestamps
net.inet.tcp.rfc1323=1 # Enable window scaling and timestamps
net.inet.tcp.window_scaling=1 # Enable TCP window scaling (legacy)
# TCP congestion control
net.inet.tcp.cc.algorithm=newreno # TCP congestion control (or cubic) (requires `kldload cc_newreno`, or cc_newreno_load="YES" in /boot/loader.conf)
net.inet.tcp.cc.newreno.beta=70 # NewReno beta parameter
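Before committing to an algorithm, confirm which congestion control modules the FreeBSD kernel has loaded (a quick check; the available modules vary by release):
# List congestion control algorithms currently available to the kernel
sysctl net.inet.tcp.cc.available
sysctl net.inet.tcp.cc.algorithm
# Load an additional module if the one you want is missing
kldload cc_cubic 2>/dev/null || true
kldstat | grep cc_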
# TCP Congestion Control Algorithm Comparison:
# NewReno vs CUBIC - Choose based on network characteristics and AFP workload
The choice of TCP congestion control algorithm significantly impacts AFP performance, especially in high-speed and high-latency networks.
Algorithm Behavior:
- Conservative approach: Linear window growth during congestion avoidance
- Fast recovery: Multiplicative decrease on packet loss detection
- RTT-based: Window growth rate inversely proportional to RTT
- Loss-based: Reacts primarily to packet loss events
Performance Profile:
Window Size Growth: Linear (additive increase)
Loss Response: Fast recovery with 50% window reduction
Fairness: Excellent among competing flows
Stability: High stability, predictable behavior
Optimal Use Cases for AFP:
- Low-latency LANs (< 10ms RTT): Excellent responsiveness
- Reliable networks with minimal packet loss
- Mixed workloads with many concurrent AFP sessions
- Legacy network equipment compatibility
Algorithm Behavior:
- Aggressive growth: Cubic function window scaling
- RTT-independent: Growth rate not affected by round-trip time
- Bandwidth-probing: Actively probes for available bandwidth
- Loss-based with optimization: Enhanced recovery mechanisms
Performance Profile:
Window Size Growth: Cubic function (more aggressive)
Loss Response: Optimized fast recovery
Fairness: Good, but can dominate over NewReno
Stability: Less predictable, more dynamic
Optimal Use Cases for AFP:
- High-bandwidth networks (1+ Gbps) with available capacity
- High-latency WANs (100+ ms RTT): RTT independence advantage
- Single or few large transfers: Maximizes throughput
- Modern network infrastructure with good buffering
Characteristic | NewReno | CUBIC | AFP Impact |
---|---|---|---|
Low Latency (<10ms) | Excellent | Good | NewReno: Better interactive performance |
High Latency (>100ms) | Poor | Excellent | CUBIC: Better file transfer speeds |
High Bandwidth (10+ Gbps) | Poor | Excellent | CUBIC: Can utilize full bandwidth |
Packet Loss Recovery | Standard | Enhanced | CUBIC: Faster recovery from losses |
Multiple Concurrent Flows | Excellent | Good | NewReno: Better fairness among clients |
Single Large Transfers | Moderate | Excellent | CUBIC: Higher throughput potential |
Network Variability | Stable | Adaptive | NewReno: More predictable performance |
Buffer Requirements | Moderate | High | CUBIC: Needs larger TCP buffers |
1 Gbps Networks:
# FreeBSD: NewReno recommended for balanced performance
net.inet.tcp.cc.algorithm=newreno # TCP congestion control (requires `kldload cc_newreno`, or cc_newreno_load="YES" in /boot/loader.conf)
net.inet.tcp.cc.newreno.beta=70 # Conservative reduction
# Linux: CUBIC acceptable but may be overkill
net.ipv4.tcp_congestion_control=cubic
# Reasoning: Network bandwidth limits throughput more than algorithm choice
10 Gbps Networks:
# FreeBSD: CUBIC recommended for high throughput
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.cubic.beta=717 # CUBIC-specific beta parameter
# Linux: CUBIC highly recommended
net.ipv4.tcp_congestion_control=cubic
# Reasoning: CUBIC's aggressive scaling helps utilize available bandwidth
40+ Gbps Networks:
# FreeBSD: CUBIC essential for maximum utilization
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.cubic.fast_convergence=1 # Enable fast convergence
# Linux: BBR preferred over CUBIC at extreme speeds
net.ipv4.tcp_congestion_control=bbr
# Reasoning: Traditional loss-based algorithms struggle at extreme speeds
Interactive AFP Use (Browse, Small Files):
- Preferred: NewReno
- Reason: Lower latency, more predictable response times
- Buffer sizing: Smaller TCP buffers acceptable
- Best for: Desktop productivity, document access
# Optimized for interactive AFP workloads
net.inet.tcp.cc.algorithm=newreno # FreeBSD requires `kldload cc_newreno`, or cc_newreno_load="YES" in /boot/loader.conf
tcprcvbuf = 1048576 # 1 MB buffers sufficient
tcpsndbuf = 1048576
server quantum = 0x200000 # 2 MB quantum for responsiveness (2097152 bytes)
Large File Transfers:
- Preferred: CUBIC (or BBR on Linux)
- Reason: Maximum throughput utilization
- Buffer sizing: Large TCP buffers essential
- Best for: Media files, backups, bulk transfers
# Optimized for large file AFP transfers
net.inet.tcp.cc.algorithm=cubic
tcprcvbuf = 16777216 # 16 MB buffers for throughput
tcpsndbuf = 16777216
server quantum = 0x1000000 # 16 MB quantum for efficiency (16777216 bytes)
Mixed Workloads:
- Preferred: NewReno (better fairness)
- Alternative: CUBIC with careful buffer tuning
- Reason: Balance between throughput and fairness
- Buffer sizing: Medium TCP buffers
DSI Write Pattern Impact:
# NewReno: Consistent performance with DSI write bursts
# - Linear growth suits AFP's bursty write patterns
# - Fast recovery from temporary congestion
# CUBIC: May over-react to DSI burst patterns
# - Aggressive growth can cause buffer bloat
# - Better with sustained large transfers
DSI Read-ahead Interaction:
# NewReno: Stable interaction with dsireadbuf
# - Predictable bandwidth utilization
# - Works well with moderate readahead values
# CUBIC: Can amplify readahead effectiveness
# - Aggressive probing matches readahead behavior
# - Requires larger dsireadbuf values for best results
Performance Testing by Algorithm:
# Test NewReno performance
sysctl net.inet.tcp.cc.algorithm=newreno
iperf3 -c afp_server -t 60 -P 4
# Test CUBIC performance
sysctl net.inet.tcp.cc.algorithm=cubic
iperf3 -c afp_server -t 60 -P 4
# Compare AFP-specific performance
# Large file copy via AFP
time cp large_file.dmg /Volumes/AFPVolume/
# Small file operations via AFP
time find /Volumes/AFPVolume/ -name "*.txt" | head -1000
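For repeatable comparisons, test against a fixed-size file rather than whatever happens to be on disk. A rough sketch (the path, size, and file names are arbitrary assumptions; the sysctl runs on the FreeBSD server, the copies on the macOS client):
# On the macOS client: create a 4 GB test file once
dd if=/dev/urandom of=/tmp/afp_test_4g.bin bs=1m count=4096
# On the FreeBSD server, switch algorithms between runs:
# sysctl net.inet.tcp.cc.algorithm=newreno (or cubic)
# Then, back on the client, time the same copy under each algorithm
time cp /tmp/afp_test_4g.bin /Volumes/AFPVolume/afp_test_newreno.bin
time cp /tmp/afp_test_4g.bin /Volumes/AFPVolume/afp_test_cubic.bin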
Algorithm-Specific Monitoring:
# Monitor congestion window behavior
ss -i | grep cwnd # Linux
netstat -T | grep cwnd # FreeBSD
The TCP beta parameter is a critical tunable that controls the multiplicative decrease behavior in TCP's AIMD (Additive Increase, Multiplicative Decrease) congestion control algorithm. Understanding and properly configuring beta parameters can significantly impact AFP performance, especially during congestion events and network recovery.
The default values for beta are well tuned and should not need changing unless you are tuning for very specific network conditions.
The beta parameter determines how aggressively TCP reduces its congestion window (cwnd) when packet loss is detected:
New_cwnd = Current_cwnd × beta
Mathematical Relationship:
- NewReno Default: beta = 0.5 (50% reduction)
- CUBIC Default: beta = 0.7 (30% reduction)
- Range: 0.1 to 1.0 (beta = 0.1 keeps only 10% of the window; beta = 1.0 means no reduction)
Congestion Window Behavior:
Normal Operation: cwnd += 1/cwnd per ACK (Additive Increase)
Loss Detection: cwnd = cwnd × beta (Multiplicative Decrease)
Recovery: cwnd grows from reduced value
Impact on AFP Performance:
- Higher beta (0.8-0.9): Faster recovery, more aggressive, higher throughput
- Lower beta (0.3-0.5): Conservative recovery, more stable, lower throughput
- Default beta (0.5-0.7): Balanced approach for most scenarios
NewReno Beta Configuration:
# FreeBSD NewReno beta parameter (range: 10-100, represents percentage)
net.inet.tcp.cc.newreno.beta=50 # Default: 50% reduction
net.inet.tcp.cc.newreno.beta=70 # Conservative: 30% reduction
net.inet.tcp.cc.newreno.beta=30 # Aggressive: 70% reduction
# View current setting
sysctl net.inet.tcp.cc.newreno.beta
CUBIC Beta Configuration:
# FreeBSD CUBIC beta parameter (range: 100-1000, represents 0.1-1.0)
net.inet.tcp.cc.cubic.beta=717 # Default: ~0.717 (28.3% reduction)
net.inet.tcp.cc.cubic.beta=819 # Conservative: ~0.819 (18.1% reduction)
net.inet.tcp.cc.cubic.beta=500 # Aggressive: 0.5 (50% reduction)
# View current setting
sysctl net.inet.tcp.cc.cubic.beta
Note: Linux beta parameters are typically compiled into the kernel and not directly tunable via sysctl. However, they can be modified through kernel modules or alternative congestion control algorithms.
Available Linux Controls:
# Select congestion control algorithm (affects beta behavior)
net.ipv4.tcp_congestion_control=cubic # Uses CUBIC's beta (~0.7)
net.ipv4.tcp_congestion_control=reno # Uses NewReno's beta (0.5)
net.ipv4.tcp_congestion_control=bbr # Uses different approach (no beta)
# View available algorithms
cat /proc/sys/net/ipv4/tcp_available_congestion_control
Recommended Configuration:
# FreeBSD: Higher beta for fast recovery
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.cubic.beta=819 # Conservative reduction (18.1%)
# Reasoning: Fast networks can handle aggressive recovery
# - Minimal latency impact from congestion events
# - Fast link recovery allows higher beta values
# - Maximizes throughput utilization
Recommended Configuration:
# FreeBSD: Lower beta for stability
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.cubic.beta=600 # Moderate reduction (40%)
# Reasoning: High-latency networks benefit from stability
# - Long RTT means slow recovery from aggressive reductions
# - Conservative beta prevents overcorrection
# - Maintains steady-state performance
Recommended Configuration:
# FreeBSD: Very conservative beta
net.inet.tcp.cc.algorithm=newreno
net.inet.tcp.cc.newreno.beta=70 # Very conservative (30% reduction)
# Reasoning: Wireless characteristics require careful handling
# - Packet loss may not indicate congestion
# - Variable latency complicates recovery timing
# - Conservative approach maintains connection stability
Beta Value | Window Reduction | Recovery Time | Throughput Impact | Stability |
---|---|---|---|---|
0.3 | 70% reduction | Slow | Lower steady-state | High |
0.5 | 50% reduction | Medium | Balanced | Medium |
0.7 | 30% reduction | Fast | Higher peak | Medium |
0.8 | 20% reduction | Very fast | Maximum | Lower |
Large File Transfers:
# Optimize for throughput recovery
net.inet.tcp.cc.cubic.beta=819 # Minimal reduction
server quantum = 0x1000000 # Large quantum matches aggressive beta (16777216 bytes)
dsireadbuf = 4 # Smaller multiplier for fast recovery
Interactive Workloads:
# Optimize for stability and fairness
net.inet.tcp.cc.newreno.beta=50 # Standard reduction
server quantum = 0x200000 # Moderate quantum for responsiveness (2097152 bytes)
dsireadbuf = 8 # Higher multiplier for consistency
Network Monitoring for Beta Optimization:
#!/bin/bash
# Monitor packet loss and adjust beta accordingly (loss is sampled with Linux ss for
# illustration; the beta sysctl below is the FreeBSD CUBIC tunable, since Linux does
# not expose beta directly)
monitor_network_loss() {
# Get current packet loss rate
loss_rate=$(ss -i | grep -o 'retrans:[0-9]*' | cut -d: -f2 | awk '{sum+=$1} END {print sum/NR}')
if (( $(echo "$loss_rate > 5" | bc -l) )); then
# High loss: more conservative beta
sysctl net.inet.tcp.cc.cubic.beta=600
echo "High loss detected, using conservative beta=600"
else
# Low loss: more aggressive beta
sysctl net.inet.tcp.cc.cubic.beta=819
echo "Low loss detected, using aggressive beta=819"
fi
}
Matched Configuration Strategy:
# High beta requires larger buffers for effectiveness
net.inet.tcp.cc.cubic.beta=819 # Aggressive (18.1% reduction)
tcprcvbuf = 33554432 # 32 MB buffer
tcpsndbuf = 33554432
# Low beta works with smaller buffers
net.inet.tcp.cc.newreno.beta=40 # Very aggressive (60% reduction)
tcprcvbuf = 8388608 # 8 MB buffer
tcpsndbuf = 8388608
Beta Impact Testing:
#!/bin/bash
# Test different beta values systematically (the cubic beta sysctl is FreeBSD-only;
# run it on the server, and run the copy from the macOS client)
test_beta_performance() {
local beta_value=$1
local test_file="test_10GB.bin"
# Set beta parameter
sysctl net.inet.tcp.cc.cubic.beta=$beta_value
# Allow setting to take effect
sleep 2
# Test AFP throughput
echo "Testing beta=$beta_value"
time cp $test_file /Volumes/AFPVolume/test_$beta_value.bin
# Test recovery after induced congestion
# (Advanced testing would include controlled packet loss)
}
# Test range of beta values
for beta in 500 600 717 819 900; do
test_beta_performance $beta
done
Real-time Congestion Window Analysis:
# Monitor cwnd behavior with different beta settings
while true; do
ss -i | grep -E "(cwnd|retrans)" | head -10
sleep 1
done
# Look for patterns:
# - Fast cwnd growth after loss (high beta working)
# - Stable cwnd without oscillation (appropriate beta)
# - Frequent retransmissions (beta too aggressive)
FreeBSD Parameter Interpretation:
# NewReno: beta parameter is percentage (1-100)
net.inet.tcp.cc.newreno.beta=70 # 70% of original window remains
# Actual beta = parameter / 100 = 0.7
# CUBIC: beta parameter is scaled (100-1000)
net.inet.tcp.cc.cubic.beta=717 # Scaled representation of 0.717
# Actual beta = parameter / 1000 = 0.717
Algorithm-Specific Beta Values:
# CUBIC: Built-in beta ≈ 0.7 (not directly tunable)
# NewReno: Built-in beta = 0.5 (not directly tunable)
# BBR: No beta (different congestion control approach)
# Alternative: Use tcp_no_metrics_save for reset behavior
net.ipv4.tcp_no_metrics_save=1 # Forces fresh beta application
Wireless networks present unique challenges for AFP performance due to their inherent characteristics: higher baseline latency, variable latency jitter, limited bandwidth-delay product, and susceptibility to interference. These factors require specialized tuning approaches different from wired network optimization.
Typical Wireless Performance Profile:
Baseline Latency: 5-50ms (vs 1-5ms wired)
Latency Jitter: ±10-100ms variations
Bandwidth-Delay Product: Limited due to latency overhead
Packet Loss: Higher due to RF interference
Congestion Response: Slower due to medium contention
Impact on AFP Protocol:
- DSI request/response cycles affected by latency jitter
- File browser responsiveness degraded by variable latency
- Large file transfers suffer from inefficient congestion window scaling
- Multiple clients create additional medium contention
Algorithm Selection Priority (Best to Worst for Wireless):
- Westwood/Westwood+ (Best - Linux - However Obsolete)
- Designed specifically for wireless environments
- Distinguishes random wireless loss from congestion loss
- Uses bandwidth estimation instead of packet loss for window adjustment
- Maintains high throughput on lossy links
- Configuration:
net.ipv4.tcp_congestion_control=westwood
# Enable bandwidth estimation refinements
net.ipv4.tcp_westwood_enable=1
- NewReno (Excellent - Universal - Recommended)
- Conservative loss-based algorithm, handles random loss well
- Stable and predictable performance on variable wireless conditions
- Well-tested across all wireless types (WiFi, cellular, satellite)
- Default fallback when specialized algorithms unavailable
- Configuration:
# Linux
net.ipv4.tcp_congestion_control=reno
# FreeBSD (FreeBSD 14+ requires the cc_newreno module to be loaded)
net.inet.tcp.cc.algorithm=newreno
net.inet.tcp.delayed_ack=0 # Better responsiveness
net.inet.tcp.cc.newreno.beta=50 # Conservative for wireless
net.inet.tcp.cc.newreno.beta=70 # Alternative: do not reduce throughput as much after packet loss; recommended on wireless, where packet loss does not indicate congestion
- Vegas (Good - Delay-Based - However Rarely Used)
- Uses RTT measurements to detect congestion before packet loss
- Proactive approach prevents buffer bloat in wireless equipment
- Works well on stable wireless links with consistent latency
- Less aggressive than loss-based algorithms
- Configuration:
net.ipv4.tcp_congestion_control=vegas
# Fine-tune delay thresholds for wireless
net.ipv4.tcp_vegas_alpha=2 # Conservative increase
net.ipv4.tcp_vegas_beta=6 # Conservative decrease
- CUBIC (Acceptable - Default)
- Default algorithm on most Linux systems
- RTT-independent window growth can be suboptimal for wireless
- Acceptable performance but not optimized for wireless characteristics
- Use only when better options unavailable
- Configuration:
net.ipv4.tcp_congestion_control=cubic
# Make more conservative for wireless
net.ipv4.tcp_cubic_beta=819 # Reduce aggressiveness
net.ipv4.tcp_cubic_fast_convergence=0 # Disable for stability
Warning: https://lists.freebsd.org/archives/freebsd-transport/2023-April/000037.html -> https://reviews.freebsd.org/D46546?id=143400 - CUBIC suffers badly on servers with poor timer sources (such as virtual machines and low-quality hardware) when combined with local, fast, but lossy wireless networks. The NewReno friendly-region window was imported into CUBIC in FreeBSD 15+.
- BBR (Problematic - High-Speed Only)
- Designed for high-bandwidth, low-loss datacenter/fiber networks
- Model depends on accurate bandwidth/RTT estimation
- Can be overly aggressive on lossy wireless links
- Only suitable for very high-quality wireless (WiFi 6/7 optimal conditions)
- Not recommended for typical wireless deployments
- Configuration (use with caution):
net.ipv4.tcp_congestion_control=bbr
net.core.default_qdisc=fq # Required fair queueing
# Consider more conservative pacing for wireless
net.ipv4.tcp_pacing_ss_ratio=150 # Reduce from default 200
net.ipv4.tcp_pacing_ca_ratio=100 # Reduce from default 120
Packet pacing is critical for wireless networks to avoid overwhelming the wireless medium and causing additional contention.
Linux Packet Pacing (with BBR):
# Fair Queueing packet scheduler (required for pacing)
net.core.default_qdisc=fq
# Enable TCP pacing
net.ipv4.tcp_pacing_ss_ratio=200 # Slow start pacing ratio
net.ipv4.tcp_pacing_ca_ratio=120 # Congestion avoidance pacing ratio
# Fine-tune the fq qdisc for wireless characteristics (configured with tc, not via /sys)
tc qdisc replace dev wlan0 root fq limit 10000 flow_limit 100
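To confirm pacing is actually in effect, check the installed qdisc and the per-socket pacing rate (the interface name is an example):
# Verify the fq qdisc is installed and watch its statistics
tc -s qdisc show dev wlan0
# The per-socket pacing rate appears in ss output on pacing-enabled kernels
ss -tin '( sport = :548 or dport = :548 )' | grep -o 'pacing_rate [^ ]*'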
FreeBSD Packet Pacing:
# Note: FreeBSD has limited built-in packet pacing compared to Linux
# Pacing is primarily available through specific congestion control algorithms
# https://freebsdfoundation.org/our-work/journal/browser-based-edition/kernel-development/adventures-in-tcp-ip-pacing-in-the-freebsd-tcp-stack/
# For BBR (if available and compiled in):
net.inet.tcp.cc.algorithm=bbr # BBR includes built-in pacing
# BBR pacing is automatic when enabled
# Hardware-level pacing (NIC driver dependent):
# Check if your network driver supports hardware pacing:
# sysctl dev.<driver_name> | grep -i pacing
# Example for Intel drivers:
# dev.ix.0.enable_head_writeback=1 # May help with pacing
# Alternative: Use dummynet for traffic shaping/pacing
# ipfw add 100 pipe 1 tcp from any to any
# ipfw pipe 1 config bw 100Mbit/s delay 10ms
# TSO/GSO optimization for wireless (these do exist)
net.inet.tcp.tso=1 # Keep TSO enabled
# Note: TSO segment limits are driver-specific, not global sysctls
Generic Packet Pacing Considerations:
# Reduce TCP send buffer to prevent over-buffering
tcpsndbuf = 524288 # 512 KB (reduced from wired)
# Match DSI quantum to pacing rate
server quantum = 0x100000 # 1 MB for smooth pacing (1048576 bytes)
TCP timestamps are particularly valuable for wireless networks due to their ability to provide accurate RTT measurements despite jitter and enable better loss detection.
Enable TCP Timestamps (Recommended):
# Linux configuration
net.ipv4.tcp_timestamps=1 # Enable timestamps
net.ipv4.tcp_tw_reuse=1 # Reuse TIME_WAIT sockets safely
# FreeBSD configuration
net.inet.tcp.rfc1323=1 # Enable RFC1323 (includes timestamps)
net.inet.tcp.ts_offset_per_conn=1 # Per-connection offset for security
Timestamp Benefits for Wireless AFP:
- Accurate RTT measurement: Critical for proper congestion window scaling
- Better loss detection: Distinguishes losses from reordering common in wireless
- PAWS protection: Prevents wrapped sequence number issues
- Connection reuse: Faster connection establishment for repeated AFP operations
# Linux Timestamp-aware TCP socket options
net.ipv4.tcp_sack=1 # SACK works better with timestamps
net.ipv4.tcp_dsack=1 # Duplicate SACK for wireless loss patterns
# Freebsd Timestamp-aware TCP socket options
net.inet.tcp.sack.enable=1
Optimized DSI Configuration for Wireless:
# /etc/netatalk/afp.conf - Wireless-optimized section
[Global]
# Reduce server quantum for lower latency perception
server quantum = 0x100000 # 1 MB (vs 4-16 MB for wired) (1048576 bytes)
# Conservative DSI read buffer to prevent stalls
dsireadbuf = 12 # 12 * 1MB = 12MB total readahead
# Smaller TCP buffers for low BDP networks
tcprcvbuf = 1048576 # 1 MB receive buffer
tcpsndbuf = 524288 # 512 KB send buffer
# Enable splice with conservative size
splice size = 65536 # 64 KB splice operations
Memory Scaling for Wireless (Conservative):
# Calculate total memory for wireless deployment
# Formula: Clients × (server_quantum + (dsireadbuf × server_quantum) + tcprcvbuf + tcpsndbuf)
# Example for 10 wireless clients:
# 10 × (1MB + (12 × 1MB) + 1MB + 0.5MB) = 10 × 14.5MB = 145MB total
# Compared to wired high-speed (could be 1GB+), wireless is much more memory-efficient
Linux Wireless Optimization:
# Reduce TCP buffer bloat common in wireless
net.ipv4.tcp_moderate_rcvbuf=1 # Enable receive buffer auto-tuning
net.core.rmem_default=1048576 # 1 MB default receive buffer
net.core.wmem_default=524288 # 512 KB default send buffer
# Optimize for wireless latency characteristics
net.ipv4.tcp_slow_start_after_idle=0 # Don't reset cwnd after idle
net.ipv4.tcp_no_metrics_save=1 # Don't save RTT metrics (too variable)
# Wireless-friendly TCP behaviors
net.ipv4.tcp_frto=2 # Enhanced F-RTO for wireless loss patterns
net.ipv4.tcp_thin_linear_timeouts=1 # Better for interactive AFP sessions
net.ipv4.tcp_thin_dupack=1 # Handle thin streams better
FreeBSD Wireless Optimization:
# Conservative TCP parameters for wireless environments
net.inet.tcp.cc.algorithm=newreno # Better for variable wireless conditions
net.inet.tcp.cc.newreno.beta=70 # Reduce the window less on loss (keep 70% of cwnd vs the default 50%)
net.inet.tcp.sendspace=65536 # Smaller send buffer for wireless
net.inet.tcp.recvspace=65536 # Balanced receive buffer
# Wireless-friendly TCP behaviors
net.inet.tcp.slowstart_flightsize=4 # Conservative slow start
net.inet.tcp.local_slowstart_flightsize=4 # Local connections too
net.inet.tcp.delayed_ack=0 # Disable delayed ACK for responsiveness
# Alternative bandwidth limiting via socket buffer management
net.inet.tcp.sendbuf_inc=8192 # Conservative send buffer increment
net.inet.tcp.recvbuf_inc=16384 # Balanced receive buffer increment
# Optimize for wireless loss recovery (keep responsive)
net.inet.tcp.rexmit_min=100 # Balanced timeout (100ms for WiFi), FreeBSD default 30ms is aggressive for wireless
net.inet.tcp.rexmit_slop=300 # Higher margin for wireless timing variations (vs 200ms default)
# Note: Wireless needs MORE slop due to channel contention, power saving, interference
# Better approach: Enable advanced loss detection instead of slow timeouts
net.inet.tcp.sack.enable=1 # Selective ACK for better loss detection
net.inet.tcp.rfc1323=1 # Timestamps for accurate RTT measurement
net.inet.tcp.msl=15000 # Reduce TIME_WAIT (15s vs 30s default)
# Buffer tuning for wireless characteristics
kern.ipc.maxsockbuf=2097152 # 2MB max socket buffer (min = sendbuf_max + recvbuf_max)
net.inet.tcp.sendbuf_max=1048576 # 1MB max send buffer
net.inet.tcp.recvbuf_max=1048576 # 1MB max receive buffer
net.inet.tcp.sendbuf_auto=1 # Enable send buffer auto-tuning
net.inet.tcp.recvbuf_auto=1 # Enable receive buffer auto-tuning
# Network interface optimization for wireless serving
net.inet.ip.forwarding=0 # Disable if not routing
net.inet.tcp.path_mtu_discovery=1 # Enable PMTU discovery
net.inet.tcp.blackhole=0 # Respond to probes (wireless needs feedback)
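To make the FreeBSD settings above survive a reboot, load the congestion control module from loader.conf and place the sysctls in /etc/sysctl.conf, as sketched below (adjust for the algorithm you chose):
# Load the NewReno module at boot and select it persistently
echo 'cc_newreno_load="YES"' >> /boot/loader.conf
echo 'net.inet.tcp.cc.algorithm=newreno' >> /etc/sysctl.conf
# Apply immediately without rebooting
kldload cc_newreno 2>/dev/null || true
sysctl net.inet.tcp.cc.algorithm=newreno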
AFP Configuration for Wireless Clients:
# afp.conf - Wireless-optimized settings
[Global]
# Reduce server quantum for lower latency perception
server quantum = 0x100000 # 1 MB (vs 4-16 MB for wired) (1048576 bytes)
# Wireless-friendly DSI parameters
dsireadbuf = 8 # Reduced readahead to avoid buffer bloat
# Optimize for Photos.app and similar latency-sensitive apps
# Note: TCP_NODELAY is hardcoded in Netatalk source (afp_dsi.c:532) - Nagle always disabled
afp read locks = no # Reduce locking overhead
Network Interface Tuning:
# Wireless interface optimization (example for iwn0)
ifconfig iwn0 txqueue 4 # Reduce TX queue depth
ifconfig iwn0 rxqueue 4 # Reduce RX queue depth
# For wired interface serving wireless clients
ifconfig em0 txcsum rxcsum tso lro # Enable hardware offload
ifconfig em0 polling # Enable polling if supported
macOS AFP clients require specific tuning to achieve optimal performance, particularly in high-bandwidth environments. The macOS network stack, Finder integration, and application-level caching systems all impact AFP performance and can be optimized for different use cases.
System-Level TCP Tuning:
# Important: macOS uses automatic buffer tuning by default (doautorcvbuf=1)
# When auto-tuning is enabled, recvspace/sendspace serve as INITIAL sizes only
# Check if auto-tuning is enabled (default=1)
sysctl net.inet.tcp.doautorcvbuf
# With auto-tuning enabled, set initial buffer sizes and limits:
sudo sysctl -w net.inet.tcp.sendspace=1048576 # 1 MB initial send buffer
sudo sysctl -w net.inet.tcp.recvspace=1048576 # 1 MB initial receive buffer (auto-tuned from here)
sudo sysctl -w net.inet.tcp.autorcvbufmax=16777216 # 16 MB max auto-tune limit (macOS parameter)
sudo sysctl -w net.inet.tcp.autosndbufmax=16777216 # 16 MB max send buffer (macOS parameter)
# Optimize TCP congestion control for client connections
sudo sysctl -w net.inet.tcp.delayed_ack=0 # Disable delayed ACK for responsiveness
# Raise the default MSS from 512 to 1448 or 1440 (not 1460, since TCP timestamps require 12 bytes)
sudo sysctl -w net.inet.tcp.mssdflt=1448
# Set system-wide socket buffer memory limits
sudo sysctl -w kern.ipc.maxsockbuf=33554432 # 32 MB max socket buffer (for auto-tuning headroom) (min = autorcvbufmax + autosndbufmax)
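After running the commands above, confirm the values actually took effect (some sysctls are read-only or clamped on newer macOS releases):
# Verify the current values
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
sysctl net.inet.tcp.autorcvbufmax net.inet.tcp.autosndbufmax
sysctl net.inet.tcp.delayed_ack net.inet.tcp.mssdflt kern.ipc.maxsockbuf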
Persistent TCP Configuration:
# Modern macOS requires LaunchDaemon plist files instead of /etc/sysctl.conf
# Create persistent sysctl configuration via plist (requires sudo)
sudo tee /Library/LaunchDaemons/com.afp.sysctl.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.afp.sysctl</string>
<key>Program</key>
<string>/usr/sbin/sysctl</string>
<key>ProgramArguments</key>
<array>
<string>/usr/sbin/sysctl</string>
<!-- AFP Client TCP Optimization (with auto-tuning awareness) -->
<string>kern.maxfiles=524288</string>
<string>kern.maxfilesperproc=524288</string>
<string>net.inet.tcp.mssdflt=1440</string>
<string>net.inet.tcp.sendspace=1048576</string>
<string>net.inet.tcp.recvspace=1048576</string>
<string>net.inet.tcp.autorcvbufmax=8388608</string>
<string>net.inet.tcp.autosndbufmax=8388608</string>
<string>net.inet.tcp.delayed_ack=0</string>
<string>kern.ipc.maxsockbuf=16777216</string>
<!-- High-bandwidth network optimizations -->
<string>net.inet.tcp.rfc3390=1</string>
<string>net.inet.tcp.cubic_fast_convergence=1</string>
<string>net.inet.tcp.cubic_tcp_friendliness=0</string>
<string>net.inet.tcp.win_scale_factor=6</string>
<string>net.inet.tcp.local_slowstart_flightsize=10</string>
<string>kern.ipc.nmbclusters=262144</string>
<!-- Wireless network optimizations -->
<string>net.inet.tcp.recv_allowed_iad=100</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>StandardOutPath</key>
<string>/var/log/afp-sysctl.log</string>
<key>StandardErrorPath</key>
<string>/var/log/afp-sysctl.error</string>
</dict>
</plist>
EOF
# Set correct permissions
sudo chmod 644 /Library/LaunchDaemons/com.afp.sysctl.plist
sudo chown root:wheel /Library/LaunchDaemons/com.afp.sysctl.plist
# Validate the plist, then load it
plutil /Library/LaunchDaemons/com.afp.sysctl.plist
sudo launchctl load /Library/LaunchDaemons/com.afp.sysctl.plist
# Verify the LaunchDaemon loaded successfully
sudo launchctl list | grep com.afp.sysctl
# Check logs
tail /var/log/afp-sysctl.log
tail /var/log/afp-sysctl.error
Note on macOS sysctl Persistence:
# macOS no longer supports /etc/sysctl.conf for persistent settings
# LaunchDaemon plist files are now required for system-level sysctl configuration
# This approach ensures settings persist across reboots and system updates
# To unload the configuration (if needed):
# sudo launchctl unload /Library/LaunchDaemons/com.afp.sysctl.plist
# sudo rm /Library/LaunchDaemons/com.afp.sysctl.plist
Important Note on Delayed ACK:
# For wireless networks, delayed ACK can be problematic
# net.inet.tcp.delayed_ack=3 reduces ACK frequency, causing:
# - Slower window growth in high-latency wireless environments
# - Increased sensitivity to packet loss
# - Poor interactive response for AFP browse operations
# Setting delayed_ack=0 ensures immediate ACK responses for better wireless performance
# - While Apple fixed the delayed_ack implementation some time ago, disabling delayed_ack may still be required on Ethernet links faster than 1GbE
delayed_ack=0 responds after every packet (OFF)
delayed_ack=1 always employs delayed ack, 6 packets can get 1 ack
delayed_ack=2 immediate ack after 2nd packet, 2 packets per ack (Compatibility Mode)
delayed_ack=3 should auto detect when to employ delayed ack, 4 packets per ack. (DEFAULT recommended)
Apple integrated support for Greg Minshall's "Proposed Modification to Nagle's Algorithm" (https://datatracker.ietf.org/doc/html/draft-minshall-nagle) into the Delayed ACK feature, which fixes the issue for 10/100/1000 interfaces. This effectively keeps the Nagle algorithm enabled while preventing the unacknowledged runt-packet problem that causes an ACK deadlock, which can unnecessarily pause transfers and introduce significant delays.
However, on interfaces faster than 1GbE (NBASE-T, 10GbE), the delayed_ack feature still presents performance issues. The large majority of users will not be impacted by this behavior on 100 Mbit or 1 Gbit interfaces.
Network Interface Optimization:
# Check current MTU and adjust if needed
networksetup -getMTU "Wi-Fi" # Check current MTU
sudo networksetup -setMTU "Wi-Fi" 1500 # Standard Ethernet MTU
sudo networksetup -setMTU "Ethernet" 9000 # Jumbo frames if supported
# Disable IPv6 if causing dual-stack delays
sudo networksetup -setv6off "Wi-Fi" # Disable IPv6 on Wi-Fi
sudo networksetup -setv6off "Ethernet" # Disable IPv6 on Ethernet
# Check and optimize interface queue length
netstat -I en0 # Monitor interface statistics
Client-Side Connection Parameters:
# Mount AFP volumes with optimized parameters
mount -t afp -o volsize=16777216,timeo=600 afp://server/volume /Volumes/volume
# Alternative mount with specific TCP options
mount_afp -o rsize=65536,wsize=65536,timeo=600,retrans=3 \
afp://server/volume /Volumes/volume
Connection Pool Management:
# Monitor active AFP connections
lsof -i | grep :548 # Show active AFP connections
netstat -an | grep 548 # Alternative connection view
# Check connection multiplexing
ps aux | grep -i afp # AFP-related processes
Finder Performance Optimization:
# Avoid creating .DS_Store files on network volumes
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
# Show all filename extensions in Finder
defaults write NSGlobalDomain AppleShowAllExtensions -bool true
# Optimize Finder view options for network browsing
defaults write com.apple.finder FXEnableExtensionChangeWarning -bool false
# Display full POSIX path as Finder window title
defaults write com.apple.finder _FXShowPosixPathInTitle -bool true
# Disable Spotlight indexing on network volumes
sudo mdutil -a -i off # Disable all Spotlight indexing
# Or per-volume:
sudo mdutil -i off /Volumes/AFPVolume
File System Tuning:
# Increase file system cache effectiveness
sudo sysctl -w kern.maxvnodes=100000 # Increase vnode limit (macOS parameter) - Check existing value as autoset from installed RAM
Extended Attributes and Resource Forks:
# Control extended attribute handling
export COPYFILE_DISABLE=1 # Disable auto write/read ._* files by tar/cp and other built-in macos tools
# Optimize resource fork handling for network volumes
defaults write com.apple.desktopservices DSDontWriteUSBStores true
AFP-Specific Caching:
# Monitor AFP client cache effectiveness
sudo fs_usage -f filesys | grep AFP # Monitor AFP file system activity
# Clear AFP connection caches when troubleshooting
sudo dscacheutil -flushcache # Flush directory service cache
sudo killall -HUP mDNSResponder # Reset network discovery
Photos.app and Media Applications:
# Optimize Photos.app for network photo libraries
defaults write com.apple.Photos OptimizeStorage false # Disable storage optimization
defaults write com.apple.Photos NetworkOptimization true # Enable network optimizations
# Increase thumbnail cache for network media
defaults write com.apple.Preview PVImageCacheSize 268435456 # 256 MB cache
Creative Applications (Final Cut Pro, Logic Pro):
# Optimize for large media files
defaults write com.apple.FinalCutPro FFPreferredTransferSize 16777216 # 16 MB transfers
defaults write com.apple.Logic ArchiveCacheSize 536870912 # 512 MB cache
# Reduce real-time priority conflicts
sudo sysctl -w kern.sched_rt_avoid_cpu0=1 # Avoid RT scheduling conflicts
Backup and Sync Applications:
# Time Machine optimization for network volumes
sudo defaults write /Library/Preferences/com.apple.TimeMachine \
RequiresACPower -bool false # Allow battery operation
# Optimize CarbonCopyCloner/rsync for AFP
defaults write com.bombich.ccc NetworkOptimization true
Network Performance Analysis:
# Monitor AFP performance with built-in tools
sudo fs_usage -w -f network | grep afp # Monitor AFP network activity
sudo iosnoop -a | grep AFP # I/O monitoring
# Network throughput testing to AFP server
iperf3 -c afp_server_ip -P 4 # Multi-stream throughput test
nc -z afp_server_ip 548 # Test AFP port connectivity
File Transfer Performance:
# Test large file transfer performance
time cp large_file.dmg /Volumes/AFPVolume/ # Measure copy performance
time rsync -av --progress large_file.dmg /Volumes/AFPVolume/
# Monitor transfer statistics
nettop -P -l 1 | grep afp # Network top for AFP traffic
iostat -d 1 10 # I/O statistics during transfer
System Resource Monitoring:
# Monitor system resources during AFP operations
top -l 1 | grep -E "(AFP|mount_afp)" # AFP process monitoring
vm_stat 1 # Virtual memory statistics
netstat -i 1 # Network interface statistics
# Advanced monitoring with Activity Monitor alternatives
sudo powermetrics -s network --samplers network -n 10 # Detailed network metrics
Connection Issues:
# Reset network state when connections are slow
sudo dscacheutil -flushcache # Flush DNS cache
sudo killall -HUP mDNSResponder # Reset Bonjour
sudo ifconfig en0 down && sudo ifconfig en0 up # Reset interface
# Force AFP connection refresh
umount /Volumes/AFPVolume # Unmount cleanly
diskutil list | grep AFP # Verify unmount
# Remount with fresh connection
Slow Browse Performance:
# Check for problematic .DS_Store files
find /Volumes/AFPVolume -name ".DS_Store" -delete # Remove problematic files
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
# Optimize Finder preferences for network browsing
defaults write com.apple.finder _FXSortFoldersFirst -bool true
defaults write com.apple.finder FXDefaultSearchScope -string "SCcf"
Large File Transfer Issues:
# Check for buffer bloat and TCP issues
sudo netstat -s | grep -i retrans # Check retransmission stats
sysctl net.inet.tcp | grep -E "(recv|send)" # Check current buffer sizes
# Test with different transfer methods
scp large_file.dmg user@server:/path/ # Compare with SCP performance
rsync --progress large_file.dmg /Volumes/AFP/ # Compare with rsync
Development Environments:
# Optimize Xcode for network project storage
defaults write com.apple.dt.Xcode DVTSourceControlEnableDebugLog -bool false
defaults write com.apple.dt.Xcode NetworkFileSystemOptimization -bool true
# Reduce file system polling for development tools
defaults write com.apple.dt.Xcode FSEventFrameworkOptimization -bool true
Character set selection and configuration significantly impacts AFP performance, particularly for file operations involving Unicode characters, international filenames, and cross-platform compatibility. Efficient character translation between macOS clients and server storage reduces CPU overhead and improves responsiveness.
Translation Performance Hierarchy (Fastest to Slowest):
1. UTF-8 ↔ UTF-8 : No conversion (optimal)
2. UTF8-MAC ↔ UTF8-MAC : No conversion (Mac-optimized)
3. UTF-8 ↔ UTF-16 : Simple byte reordering
4. UTF8 ↔ UTF8-MAC : Unicode normalization overhead (NFC ↔ NFD)
5. UTF-8 ↔ ISO-8859-1 : Single-byte conversion
6. UTF-8 ↔ MacRoman : Moderate lookup tables
7. UTF-8 ↔ Shift-JIS : Complex multi-byte conversion
8. UTF-8 ↔ Legacy DBCS : Expensive conversion algorithms
UTF8 vs UTF8-MAC Performance Analysis:
UTF8 (Standard):
• Uses NFC (Composed) Unicode normalization
• Compatible with most Unix/Linux systems
• Minimal conversion overhead on server side
• May require normalization for Mac clients
UTF8-MAC (Mac-Optimized):
• Uses NFD (Decomposed) Unicode normalization
• Native macOS HFS+/APFS compatibility
• Eliminates Mac client-side normalization
• Slight server overhead for non-Mac compatibility
Performance Impact:
• UTF8-MAC → UTF8-MAC: ~0% overhead (optimal for Mac-only)
• UTF8 → UTF8-MAC: ~15-25% normalization penalty
• Mixed environments: UTF8-MAC recommended for Mac performance
macOS Character Set Behavior:
# macOS internally uses UTF-8 for file system operations
# HFS+ normalizes Unicode to NFD (decomposed form)
# APFS uses UTF-8 with case preservation and normalization awareness
AFP Protocol Character Translation Flow:
Client (UTF-8 NFD) → AFP Protocol → Server Storage Encoding
Recommended Server Configuration (I run this myself; it is fast and stable with modern, Mac-only clients):
# /etc/netatalk/afp.conf - Optimal character settings (Mac only clients)
[Global]
# Use UTF-8 for maximum performance and compatibility
vol charset = UTF8 # Volume character set
unix charset = UTF8 # Unix file system character set
mac charset = UTF8-MAC # AFP protocol character set
Character Set Verification:
# Check current Netatalk character configuration
/usr/sbin/afpd -V | grep -i charset # Display charset support
iconv --list | grep -i utf # Available charset conversions
# Verify file system character support
locale -a | grep -i UTF # System UTF-8 locales
file -i /path/to/test_file # Check file encoding
Linux Server File Systems:
# ext4 with UTF-8 support (recommended)
mkfs.ext4 -F -L "AFPVolume" /dev/sdb1
tune2fs -o user_xattr,acl /dev/sdb1 # Enable extended attributes
# XFS with UTF-8 support (high-performance alternative)
mkfs.xfs -f -L "AFPVolume" /dev/sdb1 # Native UTF-8 support
# Verify file system character handling
dumpe2fs -h /dev/sdb1 | grep features # Check ext4 UTF-8 features
xfs_info /mount/point | grep naming # Check XFS character support
FreeBSD Server File Systems:
# ZFS with UTF-8 normalization (optimal for macOS)
zfs create -o normalization=formD \
-o casesensitivity=mixed \
-o utf8only=on \
tank/afpvolume
# UFS with UTF-8 support
newfs -U -L "AFPVolume" /dev/da1p1 # Enable soft updates with UTF-8
# Verify ZFS character settings
zfs get normalization,casesensitivity,utf8only tank/afpvolume
macOS Server File Systems:
# APFS (recommended for macOS servers)
diskutil apfs createVolume disk1 APFS "AFPVolume"
# APFS natively handles UTF-8 with normalization-insensitive comparison
# HFS+ (legacy support)
diskutil eraseVolume HFS+ "AFPVolume" disk1s1
# Note: HFS+ uses NFD normalization, may require conversion
Benchmarking Character Set Performance:
#!/bin/bash
# Test character conversion overhead for different encodings
test_charset_performance() {
local charset=$1
local test_file="test_unicode_filenames.txt"
# Create test files with international characters
echo "Testing charset: $charset"
# Time character conversion operations
time iconv -f UTF-8 -t $charset $test_file > /dev/null
time iconv -f $charset -t UTF-8 $test_file > /dev/null
echo "---"
}
# Test various character sets
test_charset_performance "UTF-8" # Baseline (no conversion)
test_charset_performance "UTF-16" # Common alternative
test_charset_performance "ISO-8859-1" # Western European
test_charset_performance "MACROMAN" # Classic Mac
test_charset_performance "SHIFT_JIS" # Japanese
Character Set CPU Overhead Analysis:
# Monitor CPU usage during heavy file operations with different character sets
top -p $(pgrep afpd) & # Monitor afpd CPU usage
# Test with international filenames
for i in {1..1000}; do
touch "/volume/测试文件_$i.txt" # Chinese characters
touch "/volume/ファイル_$i.txt" # Japanese characters
touch "/volume/файл_$i.txt" # Cyrillic characters
done
# Compare performance metrics
time find /volume -name "*测试*" | wc -l # Search with Unicode
macOS to Linux Server:
# Optimal configuration for macOS ↔ Linux AFP
[Global]
vol charset = UTF8 # Linux ext4/XFS native UTF-8
unix charset = UTF8 # Avoid conversion overhead
mac charset = UTF8 # Match macOS internal UTF-8
# Enable Unicode normalization handling
vol options = upriv,usedots,invisibledots
vol dbpath = /var/lib/netatalk/CNID/$v/
[Volume]
path = /srv/afp/volume
vol charset = UTF8 # Per-volume override if needed
macOS to FreeBSD Server:
# Optimal configuration for macOS ↔ FreeBSD AFP
[Global]
vol charset = UTF8 # ZFS native UTF-8 with normalization
unix charset = UTF8
mac charset = UTF8
# ZFS-specific optimizations
vol options = upriv,usedots,tm
# Note: vol dbnametag/dbcnidtag are INVALID - not documented in afp.conf.5
# ZFS performance optimized via ZFS-specific tuning, not nonexistent AFP options
[Volume]
path = /tank/afp/volume
vol charset = UTF8
# Note: ZFS normalization=formD matches HFS+ NFD normalization
macOS to macOS Server:
# Optimal configuration for macOS ↔ macOS AFP
[Global]
vol charset = UTF8 # APFS/HFS+ native UTF-8
unix charset = UTF8
mac charset = UTF8
# macOS-specific optimizations
vol options = upriv,usedots,tm,searchdb
spotlight = yes # Enable Spotlight integration
# Note: vol dbnametag DOES NOT EXIST - invalid configuration option
[Volume]
path = /Volumes/AFPStorage/volume
vol charset = UTF8
Asian Language Support:
# Optimized for Chinese/Japanese/Korean characters
[AsianVolume]
vol charset = UTF8 # Essential for CJK character support
mac charset = UTF8 # Avoid legacy encodings like SHIFT_JIS
unix charset = UTF8
# CJK filename length considerations
# CJK characters may use 3-4 bytes per character in UTF-8
# Adjust file name length limits accordingly
vol options = upriv,usedots,longname
European Language Support:
# Optimized for European accented characters
[EuropeanVolume]
vol charset = UTF8 # Handles all European scripts
mac charset = UTF8 # Better than ISO-8859-1 for mixed content
unix charset = UTF8
# European character normalization
vol options = upriv,usedots,casefold
# Note: vol dbnametag INVALID - no such AFP configuration parameter exists
Legacy Character Set Migration:
# Migration from legacy character sets to UTF-8
# 1. Backup existing data
rsync -avH --progress /old/volume/ /backup/
# 2. Convert filenames to UTF-8
find /old/volume -depth -print0 | while IFS= read -r -d '' file; do
    newname=$(echo "$file" | iconv -f MACROMAN -t UTF-8)
    if [ "$file" != "$newname" ]; then
        mv "$file" "$newname"
    fi
done
# 3. Update AFP configuration
# Change from: vol charset = MACROMAN
# Change to: vol charset = UTF8
# 4. Restart Netatalk service
sudo systemctl restart netatalk
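Where available, the convmv utility is a safer alternative to the hand-rolled rename loop above: it previews by default and only applies renames with --notest (charset names depend on your convmv/Perl Encode build, so MacRoman may need a different spelling):
# Preview the renames (convmv performs a dry run unless --notest is given)
convmv -f MacRoman -t UTF-8 -r /old/volume
# Apply once the preview looks correct
convmv -f MacRoman -t UTF-8 -r --notest /old/volume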
Character Set Performance Metrics:
# Monitor iconv performance during AFP operations
# Attach to the running afpd (character conversion happens inside afpd via libiconv)
perf record -g -p $(pgrep -x afpd) -- sleep 30
# Analyze character conversion bottlenecks
perf report | grep -i iconv # Character conversion hotspots
perf report | grep -i unicode # Unicode processing overhead
File Name Character Analysis:
# Analyze character distribution in file names
find /volume -type f -print0 | while IFS= read -r -d '' file; do
basename "$file" | od -c # Show character encoding
done | sort | uniq -c | sort -nr # Most common character patterns
# Identify problematic character encodings
file -bi /volume/* | grep -v "utf-8" # Find non-UTF-8 files
Compatibility Testing:
# Test file name compatibility across platforms
test_filename_compat() {
local testfile="$1"
# Create file on macOS client via AFP
touch "/Volumes/AFPVolume/$testfile"
# Verify on server storage
ls -la "/srv/afp/volume/$testfile"
# Check character encoding
echo "$testfile" | od -c
file -bi "/srv/afp/volume/$testfile"
}
# Test various character combinations
test_filename_compat "测试文件.txt" # Chinese
test_filename_compat "ファイル.txt" # Japanese
test_filename_compat "café_résumé.txt" # European accents
test_filename_compat "файл_тест.txt" # Cyrillic
Performance Optimization Checklist:
- Use UTF-8 everywhere: Client, protocol, and server storage
- Match file system encoding: Align with native file system character support
- Avoid legacy encodings: Migrate from MacRoman, ISO-8859-*, etc.
- Test international content: Validate with actual international filenames
- Monitor conversion overhead: Profile character translation performance
Compatibility Optimization Checklist:
- Normalization awareness: Handle NFD (macOS) ↔ NFC (Linux) differences
- Case sensitivity alignment: Match client and server case handling
- Extended attributes: Preserve character encoding metadata
- Length limits: Account for multi-byte character expansion
- Cross-platform validation: Test Windows, Linux, and macOS client access
This character set optimization ensures efficient translation between macOS clients and server storage while maintaining full international character support and minimizing performance overhead.
Media Production Workflows:
Creative Applications:
# Adobe Premiere Pro / After Effects
defaults write com.adobe.PremierePro NetworkScratchDisk /Volumes/AFPVolume/Scratch
# Optimize playback buffer sizes
defaults write com.apple.QuickTimePlayerX NetworkBufferSize 67108864 # 64 MB buffer
Office and Document Workflows:
# Optimize Microsoft Office for network storage
defaults write com.microsoft.Word NetworkOptimization -bool true
defaults write com.microsoft.Excel NetworkCaching -bool true
# Optimize iWork for network documents
defaults write com.apple.iWork.Pages NetworkDocumentOptimization -bool true
Mac Studio / Mac Pro (High-Performance Workstations):
# Utilize multiple network interfaces if available
sudo route add -net afp_server_network -interface en1 # Use specific interface
sudo networksetup -setmanual "Ethernet 2" ip netmask gateway # Dedicated AFP network
# Optimize for 10GbE connections
sudo sysctl -w net.inet.tcp.sendspace=16777216 # 16 MB for 10GbE
sudo sysctl -w net.inet.tcp.recvspace=16777216 # 16 MB receive buffer
MacBook (Mobile/Battery Considerations):
# Balance performance with battery life
sudo sysctl -w net.inet.tcp.sendspace=1048576 # 1 MB for battery savings
sudo pmset -a tcpkeepalive 0 # Disable TCP keep-alive on battery
# Optimize for wireless connections
defaults write com.apple.airport.wps NetworkOptimization -bool true
Mac mini (Server/Always-On):
# Optimize for continuous AFP usage
sudo pmset -a sleep 0 displaysleep 10 disksleep 0 # Prevent sleep
sudo sysctl -w net.inet.tcp.always_keepalive=1 # Maintain connections
# Increase connection limits for server usage
sudo sysctl -w kern.ipc.somaxconn=2048 # Increase connection backlog
This macOS client optimization covers the full spectrum from network stack tuning through application-specific configurations, ensuring optimal AFP performance across different Mac hardware and use cases.
The TCP beta parameter is a critical tunable that controls the multiplicative decrease behavior in TCP's AIMD (Additive Increase, Multiplicative Decrease) congestion control algorithm. Understanding and properly configuring beta can significantly impact AFP performance, especially during congestion events and network recovery.
The beta parameter determines how aggressively TCP reduces its congestion window (cwnd) when packet loss is detected:
New_cwnd = Current_cwnd × beta
Mathematical Relationship:
- NewReno Default: beta = 0.5 (50% reduction)
- CUBIC Default: beta = 0.7 (30% reduction)
- Range: 0.1 to 1.0 (90% reduction down to no reduction)
Congestion Window Behavior:
Normal Operation: cwnd += 1/cwnd per ACK (Additive Increase)
Loss Detection: cwnd = cwnd × beta (Multiplicative Decrease)
Recovery: cwnd grows from reduced value
Impact on AFP Performance:
- Higher beta (0.8-0.9): Faster recovery, more aggressive, higher throughput
- Lower beta (0.3-0.5): Conservative recovery, more stable, lower throughput
- Default beta (0.5-0.7): Balanced approach for most scenarios
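As a rough illustration of those trade-offs, the sketch below assumes a hypothetical 16 MB congestion window, a 1448-byte MSS, and growth of about one MSS per RTT in congestion avoidance, then estimates how long each beta value needs to regrow the window after a single loss:
#!/bin/sh
# Back-of-the-envelope AIMD recovery estimate (illustrative numbers only)
CWND=16777216   # 16 MB congestion window before the loss
MSS=1448
for BETA in 0.5 0.7 0.8; do
    awk -v cwnd="$CWND" -v mss="$MSS" -v beta="$BETA" 'BEGIN {
        new = cwnd * beta
        rtts = (cwnd - new) / mss      # RTTs needed to regrow the lost window
        printf "beta=%.1f  new cwnd=%.1f MB  ~%d RTTs to recover\n", beta, new / 1048576, rtts
    }'
done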
NewReno Beta Configuration:
# FreeBSD NewReno beta parameter (range: 10-100, represents percentage)
net.inet.tcp.cc.newreno.beta=50 # Default: 50% reduction
net.inet.tcp.cc.newreno.beta=70 # Conservative: 30% reduction
net.inet.tcp.cc.newreno.beta=30 # Aggressive: 70% reduction
# View current setting
sysctl net.inet.tcp.cc.newreno.beta
CUBIC Beta Configuration:
# FreeBSD CUBIC beta parameter (range: 100-1000, represents 0.1-1.0)
net.inet.tcp.cc.cubic.beta=717 # Default: ~0.717 (28.3% reduction)
net.inet.tcp.cc.cubic.beta=819 # Conservative: ~0.819 (18.1% reduction)
net.inet.tcp.cc.cubic.beta=500 # Aggressive: 0.5 (50% reduction)
# View current setting
sysctl net.inet.tcp.cc.cubic.beta
Note: Linux beta parameters are typically compiled into the kernel and not directly tunable via sysctl. However, they can be modified through kernel modules or alternative congestion control algorithms.
Available Linux Controls:
# Select congestion control algorithm (affects beta behavior)
net.ipv4.tcp_congestion_control=cubic # Uses CUBIC's beta (~0.7)
net.ipv4.tcp_congestion_control=reno # Uses NewReno's beta (0.5)
net.ipv4.tcp_congestion_control=bbr # Uses different approach (no beta)
# View available algorithms
cat /proc/sys/net/ipv4/tcp_available_congestion_control
Recommended Configuration (High-Speed LAN):
# FreeBSD: Higher beta for fast recovery
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.cubic.beta=819 # Conservative reduction (18.1%)
# Reasoning: Fast networks can handle aggressive recovery
# - Minimal latency impact from congestion events
# - Fast link recovery allows higher beta values
# - Maximizes throughput utilization
Recommended Configuration (High-Latency WAN):
# FreeBSD: Lower beta for stability
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.cubic.beta=600 # Moderate reduction (40%)
# Reasoning: High-latency networks benefit from stability
# - Long RTT means slow recovery from aggressive reductions
# - Conservative beta prevents overcorrection
# - Maintains steady-state performance
Recommended Configuration (Wireless/Lossy Links):
# FreeBSD: Very conservative beta
net.inet.tcp.cc.algorithm=newreno
net.inet.tcp.cc.newreno.beta=70 # Very conservative (30% reduction)
# Reasoning: Wireless characteristics require careful handling
# - Packet loss may not indicate congestion
# - Variable latency complicates recovery timing
# - Conservative approach maintains connection stability
Beta Value | Window Reduction | Recovery Time | Throughput Impact | Stability |
---|---|---|---|---|
0.3 | 70% reduction | Slow | Lower steady-state | High |
0.5 | 50% reduction | Medium | Balanced | Medium |
0.7 | 30% reduction | Fast | Higher peak | Medium |
0.8 | 20% reduction | Very fast | Maximum | Lower |
Large File Transfers:
# Optimize for throughput recovery
net.inet.tcp.cc.cubic.beta=819 # Minimal reduction
server quantum = 0x1000000 # Large quantum matches aggressive beta (16777216 bytes)
dsireadbuf = 4 # Smaller multiplier for fast recovery
Interactive Workloads:
# Optimize for stability and fairness
net.inet.tcp.cc.newreno.beta=50 # Standard reduction
server quantum = 0x200000 # Moderate quantum for responsiveness (2097152 bytes)
dsireadbuf = 8 # Higher multiplier for consistency
Network Monitoring for Beta Optimization:
#!/bin/sh
# FreeBSD: monitor the TCP retransmission rate and adjust the CUBIC beta accordingly
# (parse pattern assumes the stock "netstat -s -p tcp" output format)
monitor_network_loss() {
    sent=$(netstat -s -p tcp | awk '/data packets \(/ && !/retransmitted/ {print $1; exit}')
    retrans=$(netstat -s -p tcp | awk '/retransmitted/ {print $1; exit}')
    loss_pct=$(echo "scale=2; 100 * ${retrans:-0} / (${sent:-0} + 1)" | bc)
    if [ "$(echo "$loss_pct > 1" | bc)" -eq 1 ]; then
        # High loss: back off harder on each congestion event
        sysctl net.inet.tcp.cc.cubic.beta=600
        echo "High loss detected (${loss_pct}%), using conservative beta=600"
    else
        # Low loss: keep window reductions small
        sysctl net.inet.tcp.cc.cubic.beta=819
        echo "Low loss detected (${loss_pct}%), using beta=819"
    fi
}
Matched Configuration Strategy:
# High beta keeps the congestion window large, so large socket buffers can be used effectively
net.inet.tcp.cc.cubic.beta=819 # Small reduction (18.1%), windows stay large
tcprcvbuf = 33554432 # 32 MB buffer
tcpsndbuf = 33554432
# Low beta cuts the window deeply, so smaller buffers are sufficient
net.inet.tcp.cc.newreno.beta=40 # Deep reduction (60%), windows stay small
tcprcvbuf = 8388608 # 8 MB buffer
tcpsndbuf = 8388608
Beta Impact Testing:
#!/bin/bash
# Test different beta values systematically
test_beta_performance() {
local beta_value=$1
local test_file="test_10GB.bin"
# Set the beta parameter on the FreeBSD server (run the copy below from a macOS client)
sysctl net.inet.tcp.cc.cubic.beta="$beta_value"
# Allow setting to take effect
sleep 2
# Test AFP throughput
echo "Testing beta=$beta_value"
time cp $test_file /Volumes/AFPVolume/test_$beta_value.bin
# Test recovery after induced congestion
# (Advanced testing would include controlled packet loss)
}
# Test range of beta values
for beta in 500 600 717 819 900; do
test_beta_performance $beta
done
Real-time Congestion Window Analysis:
# Monitor cwnd behavior with different beta settings
while true; do
ss -i | grep -E "(cwnd|retrans)" | head -10
sleep 1
done
# Look for patterns:
# - Fast cwnd growth after loss (high beta working)
# - Stable cwnd without oscillation (appropriate beta)
# - Frequent retransmissions (beta too aggressive)
FreeBSD Parameter Interpretation:
# NewReno: beta parameter is percentage (1-100)
net.inet.tcp.cc.newreno.beta=70 # 70% of original window remains
# Actual beta = parameter / 100 = 0.7
# CUBIC: beta parameter is scaled (100-1000)
net.inet.tcp.cc.cubic.beta=717 # Scaled representation of 0.717
# Actual beta = parameter / 1000 = 0.717
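A quick way to convert a chosen beta into both FreeBSD representations (the 0.75 here is just an example value):
BETA=0.75   # example target beta
awk -v b="$BETA" 'BEGIN {
    printf "net.inet.tcp.cc.newreno.beta=%d    # percentage scale\n", b * 100
    printf "net.inet.tcp.cc.cubic.beta=%d      # /1000 scale\n", b * 1000
}'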
Algorithm-Specific Beta Values (Linux):
# CUBIC: Built-in beta ≈ 0.7 (not directly tunable)
# NewReno: Built-in beta = 0.5 (not directly tunable)
# BBR: No beta (different congestion control approach)
# Alternative: Use tcp_no_metrics_save for reset behavior
net.ipv4.tcp_no_metrics_save=1 # Don't carry cached ssthresh/cwnd metrics into new connections
Wireless TCP Settings (FreeBSD):
# Conservative wireless TCP settings
net.inet.tcp.slowstart_flightsize=8 # Small initial slow-start window (FreeBSD; not available on macOS)
net.inet.tcp.local_slowstart_flightsize=8 # Apply the same limit to local-subnet connections
# Wireless-aware timeouts
net.inet.tcp.keepinit=45000 # 45s initial keep-alive
net.inet.tcp.keepidle=600000 # 10 minutes idle time
net.inet.tcp.keepintvl=75000 # 75s between keep-alives
# Reduce memory pressure from many small wireless connections
net.inet.tcp.recvspace=1048576 # 1 MB default receive space
net.inet.tcp.sendspace=524288 # 512 KB default send space
Browse Performance Enhancement:
# Reduce directory enumeration overhead
# Directory performance optimized by using SSD for 'vol dbpath' and system-level caching
# Reduce protocol chatter
guest account = nobody # Avoid unnecessary auth rounds
Mobile Device Considerations:
# iOS/macOS devices often sleep network interfaces
# Increase tolerance for connection interruptions
[iOS-Mobile]
dsi keepalive = 300 # 5 minutes for mobile devices
tcp keepalive = 1 # Essential for sleeping devices
# Optimize for Photos.app and similar high-latency-sensitive apps
server quantum = 0x80000 # 512 KB for responsiveness (524288 bytes)
dsireadbuf = 8 # Reduced readahead for mobile
Wireless-Specific Metrics:
# Monitor wireless interface statistics
iw dev wlan0 station dump # Client connection quality
cat /proc/net/wireless # Signal strength and noise
# AFP performance with wireless characteristics
ss -i | grep -E "(rtt|cwnd|bytes_in_flight)" # Monitor congestion window
netstat -i wlan0 | grep -E "(drop|error)" # Interface error rates
# DSI protocol performance under jitter
tcpdump -i wlan0 port 548 -c 100 | grep -E "(request|response)"
Latency Jitter Impact Analysis:
# Measure AFP response time variability
ping -c 100 afp_server | awk '/^64 bytes/{print $7}' | cut -d'=' -f2 > latencies.txt
sort -n latencies.txt | awk '{
    latencies[NR] = $1
}
END {
    median = (latencies[int(NR*0.5)] + latencies[int(NR*0.5)+1]) / 2
    p95 = latencies[int(NR*0.95)]
    print "Median:", median "ms, 95th percentile:", p95 "ms"
    print "Jitter impact: " (p95 - median) "ms additional delay"
}'
Common Wireless Performance Issues:
- High latency spikes: Monitor with ping -c 1000 to identify jitter patterns
- Stalled transfers: Check for buffer bloat with ss -i showing large bytes_in_flight (Linux)
- Connection drops: Verify keep-alive settings and wireless power management
- Poor interactive response: Reduce server quantum and enable TCP timestamps
TCP Diagnostics:
# Check for wireless medium contention
iwconfig wlan0 # Basic wireless info
iw dev wlan0 scan | grep -E "(BSS|freq)" # Nearby networks causing interference
# Analyze TCP behavior - cumulative counters for all TCP events (useful for confirming what is happening to TCP)
ss -i | grep retrans # Linux
netstat -s -p tcp # FreeBSD
sudo netstat -s -p tcp # MacOS
# Analyze TCP behavior - Realtime during tests
systat -t tcp 1 # FreeBSD
nettop # MacOS (rx_ooo = Rx Out Of Order, rx_dupe = Rx Duplicates, re-tx = TCP Retransmits)
# Per Application traffic
nettop -P -L 1000 # MacOS
# AFP connection stability
lsof -i :548 | wc -l # Number of active AFP connections
netstat -an | grep 548 | grep TIME_WAIT # Check for connection churn
This wireless optimization approach recognizes that wireless networks require fundamentally different tuning strategies focused on stability and latency tolerance rather than maximum throughput optimization.
# Watch for retransmissions (indicates algorithm stress)
netstat -s | grep -i retrans
# Monitor fairness among multiple AFP sessions
ss -Htn state established '( sport = :548 )' | wc -l # Active AFP connections
Enterprise LAN (Low Latency, High Bandwidth):
- Algorithm: NewReno
- Reason: Stable, fair, predictable performance
- Use case: Multiple concurrent users, mixed workloads
WAN/Internet (High Latency):
- Algorithm: CUBIC
- Reason: RTT independence, aggressive bandwidth utilization
- Use case: Remote access, branch office connections
Data Center (Ultra High-Speed):
- Algorithm: BBR (Linux) or CUBIC (FreeBSD)
- Reason: Optimal for extreme bandwidth scenarios
- Use case: Bulk data transfers, backup operations
This algorithm selection directly impacts how effectively Netatalk can utilize available network capacity while maintaining stability and fairness among multiple AFP clients.
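Assuming the relevant congestion control modules are built for your kernel, switching and persisting the algorithm looks roughly like this (the sysctl.d file name is just an example):
# Linux: switch to BBR and persist across reboots
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr
echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.d/90-afp-tuning.conf
# FreeBSD: load and select CUBIC, persist via loader.conf and sysctl.conf
kldload cc_cubic
sysctl net.inet.tcp.cc.algorithm=cubic
echo 'cc_cubic_load="YES"' >> /boot/loader.conf
echo 'net.inet.tcp.cc.algorithm=cubic' >> /etc/sysctl.conf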
# TCP connection and retransmission tuning
net.inet.tcp.keepidle=600000 # TCP keepalive idle time (ms)
net.inet.tcp.keepintvl=15000 # TCP keepalive interval (ms)
net.inet.tcp.keepcnt=3 # TCP keepalive probe count
net.inet.tcp.finwait2_timeout=60000 # FIN_WAIT_2 timeout (ms)
# TCP fast recovery
net.inet.tcp.sack.enable=1 # Enable SACK
net.inet.tcp.syncookies=1 # Enable SYN cookies for protection
net.inet.tcp.fast_finwait2_recycle=1 # Recycle FIN_WAIT_2 connections quickly
# Memory allocation and limits
kern.maxproc=40960 # Maximum processes
kern.maxprocperuid=32768 # Maximum processes per user
kern.maxfiles=204800 # System-wide file descriptor limit
kern.maxfilesperproc=32768 # Per-process file descriptor limit
# Virtual memory tuning
vm.swap_enabled=1 # Enable swap (but minimize usage)
vm.defer_swapspace_pageouts=1 # Defer swap pageouts
vm.disable_swapspace_pageouts=0 # Allow swap pageouts when needed
# Buffer cache tuning
vfs.hibufspace=134217728 # 128 MB high buffer space threshold
vfs.lobufspace=67108864 # 64 MB low buffer space threshold
vfs.bufcache=10 # Buffer cache percentage of RAM
# ZFS-specific tuning (if using ZFS)
vfs.zfs.arc_max=8589934592 # 8 GB ARC maximum (adjust for system)
vfs.zfs.arc_min=2147483648 # 2 GB ARC minimum
vfs.zfs.prefetch_disable=0 # Enable ZFS prefetch
vfs.zfs.txg.timeout=5 # Transaction group timeout (seconds)
# Network interface tuning
net.inet.ip.intr_queue_maxlen=2048 # IP interrupt queue length
net.inet.ip.process_options=0 # Skip IP options processing
net.inet.ip.redirect=0 # Disable ICMP redirects
net.inet.ip.sourceroute=0 # Disable IP source routing
# Interrupt and polling tuning
kern.polling.enable=0 # Disable polling (use interrupts)
net.inet.tcp.delayed_ack=0 # Disable delayed ACK for performance
net.inet.udp.checksum=1 # Enable UDP checksums
# Apply immediately (for current session)
sysctl net.inet.tcp.recvbuf_max=268435456
sysctl kern.ipc.nmbclusters=262144
# For persistent settings, add to /etc/sysctl.conf and reboot
# or use service sysctl restart
# Verify settings
sysctl net.inet.tcp.recvbuf_max
sysctl kern.ipc.nmbclusters
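For the persistent route, a minimal /etc/sysctl.conf fragment covering just the values above might look like this (sendbuf_max added for symmetry):
# /etc/sysctl.conf (FreeBSD)
net.inet.tcp.recvbuf_max=268435456   # 256 MB receive buffer ceiling
net.inet.tcp.sendbuf_max=268435456   # 256 MB send buffer ceiling
kern.ipc.nmbclusters=262144          # mbuf cluster limit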
# IRQ affinity for network interfaces (bind to specific CPUs)
echo 4 > /proc/irq/24/smp_affinity # Bind network IRQ to CPU 2
echo 8 > /proc/irq/25/smp_affinity # Bind network IRQ to CPU 3
# Network RPS (Receive Packet Steering) for multi-core scaling
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus # Use CPUs 0-3 for RPS
# TCP congestion control algorithms (choose best for environment)
# bbr (Google's BBR) - excellent for high-speed, high-latency networks
# cubic (default) - good general purpose
# htcp - good for high-speed networks
# vegas - good for low-latency networks
# Network interface tuning (in /etc/rc.conf)
# For Intel interfaces (em, igb, ixgbe)
ifconfig_em0="inet 192.168.1.100 netmask 255.255.255.0 rxcsum txcsum tso lro"
# Receive and transmit descriptor ring sizes
# Add to /boot/device.hints or /boot/loader.conf
hw.em.rxd=4096 # RX descriptor ring size
hw.em.txd=4096 # TX descriptor ring size
hw.igb.num_queues=8 # Multi-queue support
# FreeBSD jails resource limits (if using jails)
security.jail.param.allow.raw_sockets=1 # Allow raw sockets in jails
# XFS mount options (recommended for large files)
mount -o noatime,largeio,inode64,swalloc,logbsize=256k /dev/sda1 /srv/afp
# ext4 mount options (good general purpose)
mount -o noatime,data=writeback,barrier=0,journal_async_commit /dev/sda1 /srv/afp
# Btrfs mount options (modern features, good for snapshots)
mount -o noatime,compress=zstd,space_cache=v2,autodefrag /dev/sda1 /srv/afp
# ZFS dataset creation (if using ZFS on Linux)
zfs create -o atime=off -o compression=lz4 -o recordsize=1M tank/afp
# UFS mount options
mount -o noatime,async,softdep /dev/ada0s1a /srv/afp
# ZFS dataset creation (native FreeBSD ZFS)
zfs create -o atime=off -o compression=lz4 -o recordsize=1M zpool/afp
zfs set primarycache=all zpool/afp # Cache both data and metadata
zfs set secondarycache=all zpool/afp # L2ARC for both data and metadata
ZFS tuning requires careful alignment with Netatalk's DSI parameters and network characteristics for optimal end-to-end performance.
Critical Principle: ZFS recordsize should align with Netatalk's server quantum for optimal I/O efficiency.
# Alignment examples based on server quantum settings:
# 1 Gbps configuration (server quantum = 0x400000 = 4MB)
zfs set recordsize=1M tank/afp # 1MB records for 4MB quantum
# Ratio: 4:1 - quantum fits exactly into 4 ZFS records
# 10 Gbps configuration (server quantum = 0x1000000 = 16MB)
zfs set recordsize=1M tank/afp # 1MB records for 16MB quantum
# Ratio: 16:1 - quantum fits exactly into 16 ZFS records
# High-performance configuration (server quantum = 0x4000000 = 64MB)
zfs set recordsize=2M tank/afp # 2MB records for 64MB quantum
# Ratio: 32:1 - quantum fits exactly into 32 ZFS records
# Rule of thumb: recordsize should be server_quantum / (4 to 32), so each quantum maps to a whole number of records
ARC tuning must account for Netatalk's buffer usage to avoid memory pressure:
# Calculate total system memory allocation:
# ARC + (clients × (server_quantum + dsireadbuf × server_quantum)) + OS overhead
# Example for 64GB system with 10 Gbps configuration:
# 50 clients × 144MB (16MB quantum + 8 × 16MB readbuf) = 7.2GB (Netatalk buffers)
# 8GB OS overhead
# Available for ARC: 64 - 7.2 - 8 = 48.8GB
vfs.zfs.arc_max=52428800000 # 48.8 GB ARC maximum
vfs.zfs.arc_min=10737418240 # 10 GB ARC minimum (20% of max)
vfs.zfs.arc_meta_limit=13107200000 # 12.2 GB metadata limit (25% of max)
vfs.zfs.arc_meta_min=2684354560 # 2.5 GB metadata minimum
# ARC efficiency monitoring:
# High hit rates (>90%) indicate good cache sizing
# Meta hit rates should be >95% for directory-heavy AFP workloads
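The same budget expressed as a small script, with the client count, quantum, and readbuf multiplier as hypothetical inputs; integer rounding makes the result slightly coarser than the hand calculation above:
#!/bin/sh
# Derive a vfs.zfs.arc_max value from the Netatalk buffer budget (sketch)
RAM_GB=64; CLIENTS=50; QUANTUM_MB=16; DSIREADBUF=8; OS_GB=8
PER_CLIENT_MB=$((QUANTUM_MB + DSIREADBUF * QUANTUM_MB))      # 16 + 128 = 144 MB
NETATALK_GB=$(( (CLIENTS * PER_CLIENT_MB + 1023) / 1024 ))   # ~8 GB, rounded up
ARC_GB=$((RAM_GB - NETATALK_GB - OS_GB))
echo "vfs.zfs.arc_max=$((ARC_GB * 1024 * 1024 * 1024))   # ${ARC_GB} GB"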
ZIL performance is critical for AFP write operations and directory updates:
# Dedicated SLOG (Separate Intent Log) device configuration:
# Use high-performance NVMe for synchronous writes
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
# ZIL-specific tuning:
vfs.zfs.zil_slog_bulk=1048576 # 1MB bulk threshold for SLOG
vfs.zfs.zil_slog_limit=1073741824 # 1GB SLOG space limit
# Sync write behavior (critical for AFP):
# AFP uses synchronous writes for metadata consistency
vfs.zfs.txg.timeout=1 # 1 second TXG timeout (low latency)
vfs.zfs.dirty_data_max=8589934592 # 8GB dirty data limit
vfs.zfs.dirty_data_sync_pct=20 # Sync at 20% dirty data
# Monitor ZIL performance:
# zpool iostat -v tank 1 # Watch SLOG device utilization
# High SLOG bandwidth indicates heavy AFP write activity
L2ARC extends cache for large AFP file sets that exceed ARC:
# Add L2ARC devices (use fast SSDs):
zpool add tank cache /dev/sda /dev/sdb
# L2ARC tuning aligned with network bandwidth:
vfs.zfs.l2arc_write_max=268435456 # 256MB/s L2ARC write rate (10 Gbps)
vfs.zfs.l2arc_write_boost=536870912 # 512MB/s initial boost rate
vfs.zfs.l2arc_headroom=8 # 8x headroom for prefetch
vfs.zfs.l2arc_feed_again=1 # Re-feed L2ARC during scans
# L2ARC sizing calculation:
# L2ARC should be 5-10x ARC size for optimal hit rates
# 48GB ARC → 240-480GB L2ARC capacity
# Monitor L2ARC effectiveness:
# arc_summary.py - check L2ARC hit rates
# Target: >80% L2ARC hit rate for cached data
Compression reduces storage I/O but increases CPU usage:
# Compression algorithm selection based on network speed:
# 1 Gbps networks (network is bottleneck):
zfs set compression=gzip-6 tank/afp # High compression, CPU cycles available
# 10 Gbps networks (balanced):
zfs set compression=lz4 tank/afp # Fast compression, good ratio
# 40+ Gbps networks (CPU becomes bottleneck):
zfs set compression=off tank/afp # No compression, maximize CPU for network
# Monitor compression effectiveness:
zfs get compressratio tank/afp # Should be >1.5x for gzip, >1.2x for lz4
Use NVMe special devices for metadata acceleration:
# Add special allocation class for metadata:
zpool add tank special mirror /dev/nvme2n1 /dev/nvme3n1
# Configure metadata allocation:
zfs set special_small_blocks=32K tank/afp # Store blocks <32K on special devices
# This accelerates:
# - Directory listings (crucial for AFP browse performance)
# - Extended attributes (AFP resource forks)
# - Small file I/O (common in AFP workloads)
# Monitor special device utilization:
zpool iostat -v tank 1 # Watch special device IOPS
Coordinate ZFS prefetch with Netatalk's dsireadbuf mechanism:
# ZFS prefetch tuning:
vfs.zfs.prefetch_disable=0 # Enable ZFS prefetch
vfs.zfs.prefetch.max_distance=134217728 # 128MB max prefetch distance
vfs.zfs.prefetch.array_rd_sz=16777216 # 16MB array read size
# Alignment with dsireadbuf:
# If dsireadbuf=8 and server_quantum=16MB:
# Total Netatalk readahead = 8 × 16MB = 128MB
# ZFS prefetch should be similar: max_distance=128MB
# This prevents double-buffering and cache competition
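The same alignment as a quick calculation, using the 10 Gbps example values (purely illustrative):
QUANTUM=16777216   # server quantum = 0x1000000 (16 MB)
DSIREADBUF=8
echo "vfs.zfs.prefetch.max_distance=$((QUANTUM * DSIREADBUF))"   # 134217728 = 128 MB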
TXG timing affects write latency and bandwidth utilization:
# Network-aligned TXG tuning:
# 1 Gbps networks (latency tolerant):
vfs.zfs.txg.timeout=30 # 30 second TXG timeout
# Maximizes batching, reduces random I/O
# 10 Gbps networks (balanced):
vfs.zfs.txg.timeout=5 # 5 second TXG timeout
# Balance between latency and batching
# 40+ Gbps networks (latency sensitive):
vfs.zfs.txg.timeout=1 # 1 second TXG timeout
# Minimize write latency for real-time workloads
# Monitor TXG efficiency:
# Regular TXG intervals indicate good balance
# Irregular intervals suggest I/O pressure
RAID-Z vs. Mirror selection based on network and workload characteristics:
# Network speed vs. storage layout recommendations:
# 1 Gbps networks:
# RAID-Z2 or RAID-Z3 - Network is bottleneck, favor capacity
zpool create tank raidz2 /dev/sd[a-f]
# 10 Gbps networks:
# Mirrors or RAID-Z1 - Balance performance and capacity
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# 40+ Gbps networks:
# Mirrors only - Maximize random I/O performance
zpool create tank mirror /dev/sd[a-b] mirror /dev/sd[c-d] mirror /dev/sd[e-f]
# Add high-speed devices for write-heavy workloads:
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1 # ZIL
zpool add tank cache /dev/nvme2n1 /dev/nvme3n1 # L2ARC
zpool add tank special mirror /dev/nvme4n1 /dev/nvme5n1 # Metadata
Complete system optimization requires coordinated tuning:
graph TD
A[AFP Client Request] --> B[Network Layer<br/>tcprcvbuf: 16MB<br/>TCP congestion control]
B --> C[Netatalk DSI Layer<br/>server_quantum: 16MB<br/>dsireadbuf: 6x = 96MB]
C --> D[ZFS ARC Cache<br/>48GB primary cache<br/>L2ARC: 240GB SSD]
D --> E{Cache Hit?}
E -->|Yes 90%| F[Memory → Network<br/>sendfile zero-copy]
E -->|No 10%| G[ZFS Storage Layer<br/>recordsize: 1MB<br/>compression: lz4]
G --> H[Special Devices<br/>Metadata: NVMe<br/>32KB blocks]
G --> I[Data Pool<br/>Mirrors for 10 Gbps<br/>RAID-Z for 1 Gbps]
H --> J[ZIL for Sync Writes<br/>NVMe SLOG<br/>1 second TXG timeout]
I --> J
J --> C
# Key ZFS metrics for AFP performance:
# ARC efficiency:
arc_summary.py | grep -E "Hit Rate|Miss Rate"
# Target: >90% hit rate for primary ARC
# L2ARC performance:
arc_summary.py | grep -A5 "L2 ARC"
# Target: >80% hit rate for L2ARC reads
# ZIL utilization:
zpool iostat -v tank 1
# Watch log device bandwidth during writes
# Prefetch effectiveness:
sysctl kstat.zfs.misc.zfetchstats.hits
sysctl kstat.zfs.misc.zfetchstats.misses
# Target: >70% prefetch hit rate
# Transaction group timing:
dtrace -n 'txg-commit { printf("%Y: TXG %d committed\n", walltimestamp, arg0) }'
# Should show regular intervals matching txg.timeout
# Compression efficiency:
zfs get compressratio tank/afp
# Should be >1.2x for lz4, >1.5x for gzip
# Record size analysis:
zdb -dddd tank/afp | grep "block size"
# Verify alignment with server quantum
1 Gbps Configuration:
zfs set recordsize=512K tank/afp # Smaller records for 4MB quantum
zfs set compression=gzip-6 tank/afp # CPU available for compression
vfs.zfs.txg.timeout=30 # Longer TXG for batching
vfs.zfs.arc_max=17179869184 # 16 GB ARC - smaller, network limited
10 Gbps Configuration:
zfs set recordsize=1M tank/afp # 1MB records for 16MB quantum
zfs set compression=lz4 tank/afp # Fast compression
vfs.zfs.txg.timeout=5 # Balanced TXG timing
vfs.zfs.arc_max=51539607552 # 48 GB ARC for caching
40+ Gbps Configuration:
zfs set recordsize=2M tank/afp # Large records for 64MB quantum
zfs set compression=off tank/afp # No compression, CPU for network
vfs.zfs.txg.timeout=1 # Fast TXG for low latency
vfs.zfs.arc_max=68719476736 # 64 GB ARC - maximum caching
This end-to-end alignment ensures optimal data flow from storage through ZFS caching layers, Netatalk buffering, TCP networking, and final delivery to AFP clients.
# Verify network buffer settings
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/ipv4/tcp_rmem
# Check TCP congestion control
cat /proc/sys/net/ipv4/tcp_congestion_control
# Monitor network performance
ss -i # Show TCP information
netstat -s | grep -i retrans # Check retransmission stats
# Verify network buffer settings
sysctl net.inet.tcp.recvbuf_max
sysctl kern.ipc.nmbclusters
# Check mbuf usage
netstat -m
# Monitor network performance
sockstat -l # Show listening sockets
netstat -s -p tcp | grep retrans # Check retransmission stats
# Linux: Pin Netatalk processes to specific NUMA nodes
numactl --cpunodebind=0 --membind=0 /usr/sbin/afpd
numactl --cpunodebind=1 --membind=1 /usr/sbin/cnid_metad
# FreeBSD: CPU affinity using cpuset
cpuset -l 0-3 /usr/local/sbin/afpd # Pin to CPUs 0-3
cpuset -l 4-7 /usr/local/sbin/cnid_metad # Pin to CPUs 4-7
# Linux: Multi-queue network interface setup
ethtool -L eth0 combined 8 # Enable 8 queues
ethtool -C eth0 rx-usecs 20 tx-usecs 20 # Interrupt coalescing
# FreeBSD: Interface queue setup
ifconfig igb0 rxcsum txcsum tso4 tso6 lro # Enable hardware offloading
graph TD
A[Performance Issue] --> B{CPU < 80%?}
B -->|Yes| C{Memory Available?}
B -->|No| D[Increase server quantum<br/>Reduce client count]
C -->|Yes| E{Network Utilization?}
C -->|No| F[Reduce dsireadbuf<br/>Reduce quantum size]
E -->|Low| G[Check storage I/O<br/>Disk bandwidth/IOPS]
E -->|High| H[Increase TCP buffers<br/>Check network infrastructure]
- Baseline measurement with default settings
- Increase server quantum progressively (2MB → 4MB → 8MB → 16MB)
- Adjust dsireadbuf to maintain reasonable memory usage
- Tune TCP buffers based on bandwidth-delay product
- Monitor system resources (CPU, memory, network, storage)
- Test with realistic workloads (large files, many small files, mixed)
- Throughput: MB/s sustained transfer rates
- Latency: Response times for file operations
- CPU utilization: System and user time
- Memory pressure: Available memory, swap usage
- Network utilization: Interface bandwidth usage
- Storage I/O: Disk throughput and IOPS
# Network throughput monitoring
iftop -i eth0 -n -P
# System resource monitoring
htop
iostat -x 1
ss -tuln | grep :548
# AFP-specific monitoring (if available)
lsof -u afpd | wc -l # Active connections
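Rough FreeBSD counterparts to the Linux monitoring commands above:
# FreeBSD equivalents
systat -ifstat 1          # Per-interface throughput
top -S                    # CPU usage including kernel/system processes
gstat -p                  # Disk I/O per provider
sockstat -4 -p 548        # Active AFP connections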
Memory Exhaustion:
Symptoms: OOM killer, malloc failed, slow performance
Solutions:
- Reduce dsireadbuf to 4-6
- Lower server quantum to 2-8 MB
- Decrease tcprcvbuf / tcpsndbuf
- Limit concurrent client connections
CPU Bottleneck:
Symptoms: High CPU usage, slow response times
Solutions:
- Increase server quantum to reduce system call overhead
- Enable use sendfile for zero-copy transfers
- Check for inefficient storage I/O patterns
- Consider CPU affinity for network interrupts
Network Underutilization:
Symptoms: Low network usage despite client demand
Solutions:
- Increase server quantum (primary bottleneck)
- Raise tcprcvbuf / tcpsndbuf for high-latency links
- Verify TCP congestion control settings
- Check for network infrastructure bottlenecks
Storage I/O Bottleneck:
Symptoms: High disk wait time, low network utilization
Solutions:
- Optimize filesystem mount options (noatime, largeio)
- Increase storage bandwidth (RAID, SSD); see the fio sketch below
- Enable sendfile to reduce disk → memory → network copies
- Consider read-ahead/write-behind caching
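To confirm the storage path itself can outrun the network, a direct-I/O sequential read test on the server is a reasonable first check (the /srv/afp/volume path is only an example):
fio --name=seqread --rw=read --bs=1M --size=4G --numjobs=1 \
    --directory=/srv/afp/volume --direct=1 --runtime=30 --time_based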
For multi-socket systems, ensure network interrupts and afpd processes run on the same NUMA node:
# Pin network interrupts to specific CPU cores
echo 4 > /proc/irq/24/smp_affinity # Pin to CPU 2
echo 8 > /proc/irq/25/smp_affinity # Pin to CPU 3
# Run afpd with NUMA affinity
numactl --cpunodebind=0 --membind=0 /usr/sbin/afpd
# Enable multi-queue for network interface
ethtool -L eth0 combined 8
# Distribute interrupts across CPU cores
for i in {0..7}; do
echo $((2**$i)) > /proc/irq/$((24+$i))/smp_affinity
done
When running in containers (Docker, LXC), ensure proper resource allocation:
# Docker Compose example
version: '3'
services:
  netatalk:
    image: netatalk:latest
    deploy:
      resources:
        limits:
          memory: 4G
          cpus: '2'
    # net.core.rmem_max / net.core.wmem_max are not network-namespaced,
    # so set them on the Docker host rather than per container:
    #   sysctl -w net.core.rmem_max=134217728
    #   sysctl -w net.core.wmem_max=134217728
    sysctls:
      - net.core.somaxconn=2048   # namespaced sysctls may be set per container
    volumes:
      - /srv/afp:/srv/afp:Z
- Large file transfers (1-10 GB files)
- Small file operations (metadata intensive)
- Mixed workload (concurrent large/small operations)
- Multi-client scenarios (realistic connection counts)
# Built-in Netatalk performance test
make -C test/testsuite
./test/testsuite/speedtest -h <server> -s <volume> -d 1024 -q 16384
# Network bandwidth measurement
iperf3 -c <server> -t 30 -P 8
# File system performance
fio --name=randwrite --ioengine=libaio --rw=randwrite \
--bs=4k --size=1G --numjobs=4 --runtime=30
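The commands above cover the large-file and raw-bandwidth cases; for the metadata-intensive workloads in the list, a crude sketch run from a macOS client (the volume path is an example) is shown below:
# Create and then enumerate many small files over AFP
mkdir -p /Volumes/AFPVolume/smallfile_test
time sh -c 'i=0; while [ $i -lt 5000 ]; do
    dd if=/dev/zero of=/Volumes/AFPVolume/smallfile_test/f$i bs=4k count=1 2>/dev/null
    i=$((i+1))
done'
time ls -lR /Volumes/AFPVolume/smallfile_test > /dev/null   # Directory enumeration cost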
High-performance AFP over DSI requires careful tuning of multiple interrelated parameters. The key is to:
- Start with network speed-appropriate base configurations
- Monitor system resources during tuning
- Test with realistic workloads
- Adjust parameters iteratively based on bottleneck analysis
Properly tuned, Netatalk can achieve 80-95% of theoretical network bandwidth on modern high-speed networks while maintaining stability and reasonable resource consumption.
For networks exceeding 10 Gbps or requiring extreme optimization, consider additional factors like:
- SR-IOV network virtualization
- DPDK user-space networking
- NVMe storage optimization
- Custom kernel tuning for specific workloads
This configuration guide provides the foundation for achieving optimal AFP performance in modern high-speed networking environments.