FastSSV Logging System¶
FastSSV includes a comprehensive logging system for debugging, monitoring, and performance analysis. This guide covers all aspects of logging configuration and usage.
Table of Contents¶
- Quick Start
- Configuration
- Log Levels
- Log Formats
- CLI Logging
- Python API Logging
- Performance Tracking
- Production Best Practices
- Examples
Quick Start¶
Console Logging (Default)¶
By default, FastSSV logs to stderr with INFO level:
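A minimal invocation shows the default behavior:

```shell
# INFO-level log lines go to stderr; the validation report stays on stdout
fastssv query.sql
```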
Enable Debug Logging¶
# CLI argument
fastssv query.sql --log-level DEBUG
# Or environment variable
export FASTSSV_LOG_LEVEL=DEBUG
fastssv query.sql
Log to File¶
# CLI argument
fastssv query.sql --log-file logs/validation.log
# Or environment variable
export FASTSSV_LOG_FILE=logs/validation.log
fastssv query.sql
Configuration¶
FastSSV logging can be configured via:
- CLI Arguments (highest priority)
- Environment Variables
- Default Values (lowest priority)
Environment Variables¶
Set these in your .env file or shell:
# Log level
FASTSSV_LOG_LEVEL=INFO
# Log file path (optional)
FASTSSV_LOG_FILE=logs/fastssv.log
# Log format
FASTSSV_LOG_FORMAT=detailed
CLI Arguments¶
Override environment variables:
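The flags used throughout this guide map one-to-one onto the environment variables above, for example:

```shell
# Each flag overrides its corresponding FASTSSV_* environment variable
fastssv query.sql \
  --log-level DEBUG \
  --log-file logs/validation.log \
  --log-format json
```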
Log Levels¶
FastSSV supports standard Python log levels:
| Level | Description | Use Case |
|---|---|---|
| DEBUG | Detailed diagnostic information | Development, troubleshooting |
| INFO | General informational messages | Production monitoring (default) |
| WARNING | Warning messages | Potential issues |
| ERROR | Error messages | Application errors |
| CRITICAL | Critical errors | System failures |
What's Logged at Each Level¶
DEBUG:

- Rule execution details (each rule)
- Rule selection logic
- Timing for each rule execution
- Internal state changes

INFO:

- Validation start/completion
- Query count and results
- Total violations found
- File I/O operations
- Performance metrics (when enabled)

WARNING:

- Configuration issues
- Deprecated feature usage
- Non-fatal problems

ERROR:

- SQL parsing failures
- File I/O errors
- Validation exceptions
Log Formats¶
Simple Format¶
Minimal output for human readability:
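Assuming the format value mirrors this heading (the guide only names detailed and json explicitly, so "simple" is an assumption here), selecting it would look like:

```shell
# "simple" is assumed to be the value name for this format
fastssv query.sql --log-format simple
```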
Output:
INFO: Starting validation: 245 characters, dialect=postgres
INFO: Validation complete: 154 rules, 2 errors, 3 warnings
Detailed Format (Default)¶
Includes timestamps and logger names:
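Since detailed is the default, the flag is only needed to override an environment variable (assuming the CLI accepts the same value as FASTSSV_LOG_FORMAT):

```shell
fastssv query.sql --log-format detailed
```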
Output:
2026-04-20 19:30:15 - fastssv.cli - INFO - Starting validation: 245 characters, dialect=postgres
2026-04-20 19:30:15 - fastssv - INFO - Running all 154 rules
2026-04-20 19:30:15 - fastssv - INFO - Validation complete: 154 rules, 2 errors, 3 warnings
JSON Format¶
Structured logs for machine parsing and log aggregation:
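Selected with the same flag:

```shell
fastssv query.sql --log-format json
```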
Output:
{"timestamp": "2026-04-20 19:30:15", "level": "INFO", "logger": "fastssv.cli", "message": "Starting validation: 245 characters, dialect=postgres"}
{"timestamp": "2026-04-20 19:30:15", "level": "INFO", "logger": "fastssv", "message": "Running all 154 rules"}
{"timestamp": "2026-04-20 19:30:15", "level": "INFO", "logger": "fastssv", "message": "Validation complete: 154 rules, 2 errors, 3 warnings", "violation_count": 5}
Benefits:
- Easy parsing with tools like jq
- Compatible with log aggregation systems (Elasticsearch, Splunk, etc.)
- Structured fields for filtering and analysis
CLI Logging¶
Basic Usage¶
# Default INFO logging to console
fastssv query.sql
# Debug logging
fastssv query.sql --log-level DEBUG
# Log to file
fastssv query.sql --log-file logs/validation.log
# JSON logs for production
fastssv query.sql --log-format json --log-file logs/validation.json
What's Logged¶
The CLI logs:

- Input Processing:
  - SQL file path or stdin reading
  - SQL length in characters
  - Number of queries detected
- Configuration:
  - Dialect detection
  - Strict mode status
  - Rule selection
- Validation:
  - Validation start
  - Progress for multiple queries
  - Violation counts per query
  - Performance metrics
- Output:
  - Report file path
  - Final results summary
Multi-Query Logging¶
When validating multiple queries:
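For example, running a file that contains several semicolon-separated statements:

```shell
# queries.sql holds multiple statements
fastssv queries.sql
```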
Output:
2026-04-20 19:30:15 - fastssv.cli - INFO - Read SQL input: 5432 characters from queries.sql
2026-04-20 19:30:15 - fastssv.cli - INFO - Split into 25 queries
2026-04-20 19:30:15 - fastssv.cli - INFO - Processing 25 queries individually
2026-04-20 19:30:15 - fastssv.cli - INFO - Query 1: VALID (0 errors, 2 warnings, 45.23ms)
2026-04-20 19:30:15 - fastssv.cli - INFO - Query 2: INVALID (1 errors, 0 warnings, 38.91ms)
...
2026-04-20 19:30:16 - fastssv.cli - INFO - Batch validation complete: 25 queries, 15 valid, 10 invalid, 1250.45ms total
Python API Logging¶
Setup Logging in Code¶
from fastssv.core.logging import setup_logging, get_logger
# Configure logging
logger = setup_logging(
    level="DEBUG",
    log_file="logs/my_app.log",
    log_format="json",
)
# Or use existing logger
from fastssv import validate_sql_structured
logger.info("Starting SQL validation")
violations = validate_sql_structured(sql, dialect="postgres")
logger.info(f"Found {len(violations)} violations")
Module-Specific Loggers¶
Get a logger for your module:
from fastssv.core.logging import get_logger
logger = get_logger(__name__) # e.g., "my_app.validation"
logger.debug("Processing query")
logger.info("Validation complete")
logger.warning("Potential issue detected")
logger.error("Validation failed")
JSON-formatted timing data¶
The CLI's log_validation_complete and log_rule_execution helpers
already attach duration_ms as a structured field on the log record.
With FASTSSV_LOG_FORMAT=json, those values appear in the structured
output:
{
  "timestamp": "2026-04-20 19:30:15",
  "level": "INFO",
  "logger": "fastssv.cli",
  "message": "Validation complete: 154 rules, 2 errors, 3 warnings",
  "duration_ms": 125.45,
  "violation_count": 5
}
Production Best Practices¶
1. Log Level¶
Use INFO in production, DEBUG for troubleshooting:
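For example:

```shell
# Production default
export FASTSSV_LOG_LEVEL=INFO

# Temporarily switch on verbose output while troubleshooting
export FASTSSV_LOG_LEVEL=DEBUG
```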
2. Log Format¶
Use JSON for production (easier to parse):
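Combined with a file destination so the structured stream can be collected:

```shell
export FASTSSV_LOG_FORMAT=json
export FASTSSV_LOG_FILE=logs/fastssv.json
```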
3. Log Rotation¶
Use logrotate or similar tools to manage log files:
# /etc/logrotate.d/fastssv
/var/log/fastssv/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
4. Log Aggregation¶
Collect logs in centralized systems:
- Elasticsearch + Kibana: Search and visualize logs
- Splunk: Enterprise log management
- CloudWatch/Stackdriver: Cloud-native logging
Example: Send JSON logs to Elasticsearch:
# Logs go to stderr by default, so route stderr into the pipe
fastssv query.sql --log-format json 2>&1 >/dev/null | \
while read line; do
curl -X POST "http://localhost:9200/fastssv-logs/_doc" \
-H 'Content-Type: application/json' \
-d "$line"
done
5. Performance Monitoring¶
The CLI emits duration_ms as a structured field on validation-complete
and per-rule log records (see log_validation_complete /
log_rule_execution in core.logging). Pair with JSON output to
query for slow validations:
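For instance, with logs written to a JSON file, a jq filter can surface entries above a latency threshold (the 100 ms cutoff is illustrative):

```shell
# List log entries slower than 100 ms, slowest first
cat logs/fastssv.json | \
  jq -r 'select(.duration_ms != null and .duration_ms > 100) | "\(.duration_ms)\t\(.message)"' | \
  sort -rn
```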
Examples¶
Example 1: Debug Mode with File Logging¶
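A straightforward combination of the flags covered above:

```shell
# Verbose diagnostics, persisted for later inspection
fastssv query.sql --log-level DEBUG --log-file logs/debug.log

# Follow the log as it is written
tail -f logs/debug.log
```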
Example 2: Production JSON Logging¶
export FASTSSV_LOG_LEVEL=INFO
export FASTSSV_LOG_FORMAT=json
export FASTSSV_LOG_FILE=logs/production.json
fastssv batch_queries.sql
Example 3: Python API with Custom Logger¶
import time
from fastssv import validate_sql_structured
from fastssv.core.logging import setup_logging
# Setup
logger = setup_logging(level="INFO", log_format="json")
# Validate with timing
sql = "SELECT * FROM condition_occurrence WHERE condition_concept_id = 201826;"
start = time.perf_counter()
violations = validate_sql_structured(sql, dialect="postgres")
duration_ms = (time.perf_counter() - start) * 1000
logger.info(
    f"Validation complete: {len(violations)} violations",
    extra={"violation_count": len(violations), "duration_ms": round(duration_ms, 2)},
)
Example 4: Parsing JSON Logs with jq¶
# Count violations by severity
cat logs/fastssv.json | \
jq -r 'select(.violation_count) | .level' | \
sort | uniq -c
# Find slowest rules
cat logs/fastssv.json | \
jq -r 'select(.rule_id and .duration_ms) | "\(.duration_ms)\t\(.rule_id)"' | \
sort -rn | head -10
# Extract all errors
cat logs/fastssv.json | \
jq 'select(.level == "ERROR")'
Troubleshooting¶
No Logs Appearing¶
- Check the log level: use DEBUG to see more output
- Check file permissions if logging to a file
- Verify environment variables: echo $FASTSSV_LOG_LEVEL
Too Much Log Output¶
- Increase the log level to WARNING or ERROR
- Filter specific loggers in production
Log File Growing Too Large¶
- Implement log rotation (see Production Best Practices)
- Use JSON format and stream to log aggregation system
- Reduce log level in production
Related Documentation¶
- HTTP API — the FastAPI service uses the same logging stack and emits sql_hash (never the SQL body) per validation
- JSON output — the CLI's structured report format, distinct from the log stream
- Plugin system — how to add log calls inside a custom rule