📊 Performance Benchmarks

Comprehensive performance analysis of Rush Shell components and operations

⚡ 20 Benchmarks
⏱️ 8.00s Total Execution Time
📈 400ms Average per Benchmark (8.00s / 20)

📋 Latest Benchmark Report

🚀 Rush Shell Performance Benchmark Report

📈 Performance Analysis

Performance Distribution

Fast (<10ms): 5 benchmarks
Medium (10-100ms): 2 benchmarks
Slow (≥100ms): 13 benchmarks

Fastest Components

Lexer (Basic Tokens): 409,562 ops/sec
Parser (Basic Commands): 102,706 ops/sec
Case Statements: 1,893 ops/sec

Optimization Opportunities

Complex Pipelines: 21 ops/sec
Script Execution: 251 ops/sec
Command Substitution: 270 ops/sec

🧪 Benchmark Categories

📝 Lexer Benchmarks

Tokenization performance for various command types

  • Basic tokenization (simple commands)
  • Complex tokenization (quotes, variables, expansions)
  • Large input tokenization
🔍 Parser Benchmarks

AST construction speed for complex structures

  • Basic command parsing
  • Complex structure parsing (if/for/while/case)
  • Function definition parsing
⚡ Executor Benchmarks

Command execution performance

  • Built-in command execution
  • External command execution
  • Variable operations
🔄 Expansion Benchmarks

Variable and arithmetic expansion speed

  • Variable expansion performance
  • Arithmetic expansion performance
  • Command substitution performance
๐Ÿ—๏ธ

Control Structure Benchmarks

If/for/while/case statement performance

  • If statement execution
  • Loop execution (for/while)
  • Case statement execution
🔗 Pipeline Benchmarks

Pipe and redirection performance

  • Simple pipeline execution
  • Complex pipeline execution
  • Pipeline with redirections
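
All throughput figures in these categories are reported in ops/sec. As a minimal sketch of how such a number can be measured with a plain wall-clock loop; the tokenize function below is a hypothetical stand-in for the real rush lexer entry point, whose actual API is not shown in this report:

use std::time::Instant;

// Stand-in for the real lexer entry point (assumption): splits on
// whitespace instead of performing real shell tokenization.
fn tokenize(input: &str) -> usize {
    input.split_whitespace().count()
}

fn main() {
    let input = "echo hello world | grep hello";
    let iterations = 100_000;

    let start = Instant::now();
    let mut tokens = 0usize;
    for _ in 0..iterations {
        tokens += tokenize(input);
    }
    let secs = start.elapsed().as_secs_f64();

    // ops/sec = iterations / elapsed seconds, the unit used in this report.
    println!("{tokens} tokens, {:.0} ops/sec", iterations as f64 / secs);
}

Summing the token counts keeps the loop observable so the optimizer cannot elide it; std::hint::black_box is the more robust option on recent toolchains.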

🎯 Running Benchmarks

Complete Benchmark Suite

# Run from repository root
cargo run -p rush-benchmarks

Executes the 20 benchmark scenarios covering all six categories above

View HTML Report

# Serve locally
python3 -m http.server 8000 -d target/

# Visit http://localhost:8000/benchmark_report.html

Visual report with detailed metrics and performance analysis

JSON Results

# Machine-readable results
cat target/benchmark_results.json

Structured data for CI/CD integration and trend analysis
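
The schema of benchmark_results.json is not documented in this report, so the sketch below assumes a top-level array of objects with name and ops_per_sec fields (verify against the real output). It reads a stored baseline, flags anything more than 10% slower, and exits nonzero so a CI job fails; it needs the serde_json crate:

use serde_json::Value;
use std::collections::HashMap;
use std::fs;
use std::process::exit;

// Read a results file into a flat list of benchmark records.
fn load(path: &str) -> Vec<Value> {
    let text = fs::read_to_string(path).expect("missing results file");
    serde_json::from_str::<Value>(&text)
        .expect("invalid JSON")
        .as_array()
        .cloned()
        .unwrap_or_default()
}

fn main() {
    // Baseline results checked in (or cached) from a previous run.
    let baseline: HashMap<String, f64> = load("baseline.json")
        .iter()
        .filter_map(|b| Some((b["name"].as_str()?.to_string(), b["ops_per_sec"].as_f64()?)))
        .collect();

    let mut regressed = false;
    for b in load("target/benchmark_results.json") {
        let (Some(name), Some(now)) = (b["name"].as_str(), b["ops_per_sec"].as_f64()) else {
            continue;
        };
        if let Some(&before) = baseline.get(name) {
            // Flag anything more than 10% slower than the stored baseline.
            if now < before * 0.9 {
                println!("REGRESSION {name}: {before:.0} -> {now:.0} ops/sec");
                regressed = true;
            }
        }
    }
    // Nonzero exit fails the CI job on any regression.
    if regressed {
        exit(1);
    }
}

The 10% threshold is arbitrary; pick one wide enough to absorb run-to-run noise on your CI hardware.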

📊 Performance Monitoring

🔍 Regression Detection

Track performance changes over time and identify regressions during development

⚡ Optimization Validation

Verify performance improvements and ensure no regressions in execution speed

🔄 CI/CD Integration

Automated performance testing in build pipelines for continuous monitoring

📈 Historical Tracking

JSON export enables trend analysis and long-term performance monitoring
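
As one sketch of such tracking, again assuming the hypothetical name/ops_per_sec schema from the CI example above: append a timestamped row per benchmark to a CSV after each run, which any plotting or diffing tool can consume:

use serde_json::Value;
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::time::{SystemTime, UNIX_EPOCH};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let results: Value =
        serde_json::from_str(&fs::read_to_string("target/benchmark_results.json")?)?;
    let ts = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs();

    // One CSV row per benchmark per run: unix_time,name,ops_per_sec
    let mut history = OpenOptions::new()
        .create(true)
        .append(true)
        .open("benchmark_history.csv")?;

    for b in results.as_array().into_iter().flatten() {
        if let (Some(name), Some(ops)) = (b["name"].as_str(), b["ops_per_sec"].as_f64()) {
            writeln!(history, "{ts},{name},{ops:.0}")?;
        }
    }
    Ok(())
}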

💡 Recommendations

⚠️ Performance Optimization

13 of the 20 benchmarks fall in the slow (≥100ms) band. Consider optimizing the slowest components identified in the benchmark report.

💡 Regular Monitoring

Run benchmarks regularly to track performance trends and catch regressions early.

🔍 Focus Areas

Focus optimization effort on the slowest benchmarks in the results above: complex pipelines (21 ops/sec), script execution (251 ops/sec), and command substitution (270 ops/sec).