🚀 Building Production-Ready Security: My Journey Optimizing a Rust Blog with Axum
A deep dive into rate limiting, intelligent caching, and modern Rust patterns for web applications
Recently, I've been working on hardening my blog, which I built with Rust and Axum for production deployment on Fly.io. What started as a simple blog quickly evolved into exploring some cool Rust patterns and security best practices. Here's what I learned along the way! 🦀
🎯 The Challenge: Production-Ready Security & Performance
Every request costs money when you're running a web application in the cloud. More importantly, you must protect against various attack vectors that could drain your wallet or compromise your application. I wanted to implement:
- Rate limiting to prevent DDoS attacks
- Intelligent view counting to reduce database load
- Static content caching for better performance
- Request timeouts and size limits for resource protection
- Modern Rust patterns using the standard library
Let me walk you through the solutions I implemented!
🛡️ Rate Limiting with tower_governor
First up was implementing rate limiting. The `tower_governor` crate provides an elegant solution that integrates beautifully with Axum's middleware system:
```rust
use tower_governor::{governor::GovernorConfigBuilder, key_extractor::SmartIpKeyExtractor, GovernorLayer};

// In the router configuration
.layer(
    GovernorLayer {
        config: Arc::new(
            GovernorConfigBuilder::default()
                .per_second(1)    // 1 request per second baseline
                .burst_size(30)   // Allow bursts up to 30 requests
                .key_extractor(SmartIpKeyExtractor) // Smart IP detection
                .finish()
                .unwrap(),
        ),
    }
)
```
What I love about this approach is how `SmartIpKeyExtractor` automatically handles the proxy headers (`X-Forwarded-For`, `CF-Connecting-IP`, etc.) you typically encounter in production deployments. The burst configuration lets legitimate users have a smooth experience while still protecting against abuse.
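To build intuition for what the `per_second`/`burst_size` pair means, here is a toy token bucket in plain std Rust. This is only an illustration of the idea behind the limiter, not tower_governor's actual implementation; the names `TokenBucket` and `allow` are mine.

```rust
use std::time::Instant;

// Toy token bucket: holds at most `burst` tokens, refilled at
// `per_second` tokens per second. Each request spends one token.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(burst: u32, per_second: u32) -> Self {
        Self {
            capacity: burst as f64,
            tokens: burst as f64,
            refill_per_sec: per_second as f64,
            last_refill: Instant::now(),
        }
    }

    // Returns true if the request is allowed, consuming one token.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.last_refill = now;
        // Refill based on elapsed time, capped at the burst capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // burst of 3, refilling 1 token per second
    let mut bucket = TokenBucket::new(3, 1);
    assert!(bucket.allow());
    assert!(bucket.allow());
    assert!(bucket.allow());
    // burst exhausted: an immediate 4th request is rejected
    assert!(!bucket.allow());
    println!("token bucket behaves as expected");
}
```

The same shape explains the production config above: a baseline of 1 request per second, with headroom for bursts of up to 30.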
🧠 Intelligent View Count Caching
One of the most interesting challenges was optimizing view counting. Every article view was triggering multiple database operations:
```sql
-- This was happening on EVERY view
SELECT COUNT(*) FROM post_views WHERE post_id = ? AND ip_address = ? AND user_agent = ?;
INSERT INTO post_views (post_id, ip_address, user_agent, viewed_at) VALUES (?, ?, ?, datetime('now'));
UPDATE posts SET views = views + 1 WHERE id = ?;
```
I implemented a time-based cache using `tokio::sync::RwLock` to reduce database writes dramatically:
```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;

pub struct ViewCache {
    cache: RwLock<HashMap<String, Instant>>,
}

impl ViewCache {
    pub async fn should_count_view(&self, ip_address: &str, post_id: &str, user_agent: &str) -> bool {
        let cache_key = format!("{}:{}:{}", ip_address, post_id, user_agent);
        let mut cache = self.cache.write().await;
        let now = Instant::now();

        // Clean up old entries to prevent unbounded memory growth
        cache.retain(|_, &mut last_time| now.duration_since(last_time) < Duration::from_secs(600));

        if let Some(&last_view) = cache.get(&cache_key) {
            // Don't count the view if less than 5 minutes have passed since the last one
            if now.duration_since(last_view) < Duration::from_secs(300) {
                return false;
            }
        }

        cache.insert(cache_key, now);
        true
    }
}
```
This approach reduces database writes significantly while maintaining accurate view counts. The automatic cleanup prevents memory leaks, and the 5-minute cooldown balances accuracy and performance.
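The cooldown policy itself doesn't need async machinery to be exercised. Here is a synchronous sketch (std `Mutex` instead of `tokio::sync::RwLock`, and a configurable cooldown) that makes the logic easy to unit-test in isolation; `SyncViewCache` is my own stand-in name, not from the blog's codebase.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

// Synchronous sketch of the same cooldown policy, for unit testing.
struct SyncViewCache {
    cache: Mutex<HashMap<String, Instant>>,
    cooldown: Duration,
}

impl SyncViewCache {
    fn new(cooldown: Duration) -> Self {
        Self { cache: Mutex::new(HashMap::new()), cooldown }
    }

    fn should_count_view(&self, ip: &str, post_id: &str, user_agent: &str) -> bool {
        let key = format!("{ip}:{post_id}:{user_agent}");
        let mut cache = self.cache.lock().unwrap();
        let now = Instant::now();
        if let Some(&last) = cache.get(&key) {
            if now.duration_since(last) < self.cooldown {
                return false; // still inside the cooldown window
            }
        }
        cache.insert(key, now);
        true
    }
}

fn main() {
    let cache = SyncViewCache::new(Duration::from_secs(300));
    // The first view counts; an immediate repeat from the same visitor does not.
    assert!(cache.should_count_view("1.2.3.4", "post-1", "ua"));
    assert!(!cache.should_count_view("1.2.3.4", "post-1", "ua"));
    // A different post is tracked under its own key.
    assert!(cache.should_count_view("1.2.3.4", "post-2", "ua"));
    println!("cooldown policy ok");
}
```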
🏗️ Modern Rust: Goodbye lazy_static, Hello OnceLock
One of my favorite improvements was migrating from `lazy_static` to `std::sync::OnceLock`. This eliminates an external dependency while providing cleaner, more performant static initialization:
```rust
use std::sync::OnceLock;

// Before: external dependency
lazy_static! {
    static ref YOUTUBE_REGEX: Regex = Regex::new(r#"..."#).unwrap();
}

// After: standard library goodness
static YOUTUBE_REGEX: OnceLock<Regex> = OnceLock::new();

fn get_youtube_regex() -> &'static Regex {
    YOUTUBE_REGEX.get_or_init(|| Regex::new(r#"..."#).unwrap())
}
```
The `OnceLock` pattern is thread-safe and lazy, and initialization happens exactly once. It's perfect for expensive-to-compute static values like compiled regexes.
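The "exactly once" guarantee is easy to demonstrate with only the standard library. This sketch counts how many times the init closure actually runs (the atomic counter is just instrumentation for the demo):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;

static INIT_CALLS: AtomicUsize = AtomicUsize::new(0);
static EXPENSIVE: OnceLock<String> = OnceLock::new();

// get_or_init runs the closure at most once, no matter how many callers
// (or threads) race to it; later calls return the cached value.
fn expensive_value() -> &'static String {
    EXPENSIVE.get_or_init(|| {
        INIT_CALLS.fetch_add(1, Ordering::SeqCst);
        "computed once".to_string()
    })
}

fn main() {
    let a = expensive_value();
    let b = expensive_value();
    assert_eq!(a, "computed once");
    assert!(std::ptr::eq(a, b)); // same &'static value both times
    assert_eq!(INIT_CALLS.load(Ordering::SeqCst), 1); // closure ran once
    println!("OnceLock initialized exactly once");
}
```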
🎭 The AppState Pattern: Dependency Injection Done Right
Rather than relying on global state, I integrated the view cache into Axum's `AppState` pattern. This makes testing easier and follows Rust's ownership principles:
```rust
#[derive(Clone)]
pub struct AppState {
    pub db_pool: SqlitePool,
    pub template_env: Arc<Tera>,
    pub config: Arc<AppConfig>,
    pub view_cache: Arc<ViewCache>, // 👈 Managed state
}

impl AppState {
    pub fn new(db_pool: SqlitePool, template_env: Arc<Tera>, config: Arc<AppConfig>) -> Self {
        Self {
            db_pool,
            template_env,
            config,
            view_cache: Arc::new(ViewCache::new()),
        }
    }
}

// In handlers, dependencies are explicit
pub async fn increment_view_count(
    pool: &sqlx::SqlitePool,
    view_cache: &ViewCache, // 👈 Explicit dependency
    post_id: &str,
    ip_address: &str,
    user_agent: &str,
) -> Result<(), AppError> {
    if !view_cache.should_count_view(ip_address, post_id, user_agent).await {
        return Ok(());
    }
    // ... database operations only when necessary
}
```
This pattern makes dependencies explicit, improves testability, and follows Rust's "fearless concurrency" principle by making shared state obvious.
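Why is it fine for Axum to clone `AppState` for every handler? Because cloning only bumps `Arc` reference counts, so every clone observes the same shared state. A minimal std-only sketch (a `Mutex<u64>` standing in for the real `ViewCache`):

```rust
use std::sync::{Arc, Mutex};

// Cloning this struct copies Arc pointers, not the data behind them.
#[derive(Clone)]
struct AppState {
    view_cache: Arc<Mutex<u64>>, // stand-in for Arc<ViewCache>
}

fn main() {
    let state = AppState { view_cache: Arc::new(Mutex::new(0)) };
    let handler_copy = state.clone(); // what the framework hands to a handler

    *handler_copy.view_cache.lock().unwrap() += 1;

    // The original and the clone see the same underlying value.
    assert_eq!(*state.view_cache.lock().unwrap(), 1);
    assert_eq!(Arc::strong_count(&state.view_cache), 2);
    println!("clones share one cache");
}
```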
📦 Static Content Optimization with ServiceBuilder
For static assets, I implemented proper HTTP caching using tower's `ServiceBuilder` pattern:
```rust
use tower::ServiceBuilder;
use tower_http::set_header::SetResponseHeaderLayer;

// Static files with aggressive caching
.nest_service("/static",
    ServiceBuilder::new()
        .layer(SetResponseHeaderLayer::overriding(
            header::CACHE_CONTROL,
            HeaderValue::from_static("public, max-age=31536000, immutable"), // 1 year
        ))
        .service(ServeDir::new("static"))
)
// Images with a shorter cache duration
.nest_service("/data",
    ServiceBuilder::new()
        .layer(SetResponseHeaderLayer::overriding(
            header::CACHE_CONTROL,
            HeaderValue::from_static("public, max-age=86400"), // 1 day
        ))
        .service(ServeDir::new("data"))
)
```
The `ServiceBuilder` pattern lets you layer middleware in a type-safe way, and the compiler ensures everything fits together correctly.
⏱️ Resource Protection: Timeouts and Limits
Finally, I added multiple layers of protection against resource exhaustion:
```rust
// Middleware chain, ordered for security
.layer(TimeoutLayer::new(std::time::Duration::from_secs(30))) // Request timeout
.layer(RequestBodyLimitLayer::new(50 * 1024 * 1024))          // 50 MB request limit
.layer(DefaultBodyLimit::max(10 * 1024 * 1024))               // 10 MB upload limit
```
These layers work together to prevent various attack vectors:
- TimeoutLayer prevents slow-loris attacks
- RequestBodyLimitLayer prevents memory exhaustion from large payloads
- DefaultBodyLimit specifically protects file upload endpoints
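The body-limit idea reduces to a simple policy: reject any payload larger than the cap before doing real work with it. This toy check only illustrates that policy; the real `RequestBodyLimitLayer` enforces it at the middleware level, without buffering the whole body first, and `check_body_limit` is a name I made up for the sketch.

```rust
// Toy body-size policy: error out once a payload exceeds `max` bytes.
fn check_body_limit(body: &[u8], max: usize) -> Result<(), &'static str> {
    if body.len() > max {
        Err("413 Payload Too Large")
    } else {
        Ok(())
    }
}

fn main() {
    const LIMIT: usize = 10 * 1024 * 1024; // the 10 MB upload limit above
    assert!(check_body_limit(&[0u8; 1024], LIMIT).is_ok());
    assert!(check_body_limit(&vec![0u8; LIMIT + 1], LIMIT).is_err());
    println!("limit policy ok");
}
```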
📊 The Results
The performance improvements were substantial:
- Database Load: ~85% reduction in view count writes
- Response Times: Faster static asset delivery through browser caching
- Resource Usage: Better memory and CPU utilization through request limits
- Security Posture: Protection against DDoS, slow-loris, and payload attacks
🔬 Cool Rust Features Utilized
Throughout this project, I leveraged several modern Rust features:
- `std::sync::OnceLock` - Thread-safe lazy static initialization
- `tokio::sync::RwLock` - Async-aware reader-writer locks
- `Arc<T>` sharing - Low-cost shared ownership across async boundaries
- Type-safe middleware composition - Tower's `ServiceBuilder` pattern
- Explicit dependency injection - Through Axum's `State` extractor
🎓 Key Takeaways
Building production-ready Rust web applications taught me several important lessons:
- Embrace explicit dependencies over global state - it makes testing and reasoning about code much easier
- The standard library is powerful - `OnceLock` eliminated an external dependency while improving performance
- Async Rust primitives are mature - `RwLock` and `Arc` make concurrent programming straightforward
- Layer your defenses - multiple middleware layers provide defense in depth
- Performance optimization often improves security - caching reduces attack surface area
🚀 What's Next?
This foundation opens up interesting possibilities for future enhancements:
- Distributed caching with Redis for multi-instance deployments
- Metrics collection for monitoring and alerting
- Content Security Policy headers for additional security
- Database connection pooling optimizations