📊 Performance Benchmarks

Comprehensive performance analysis of Rush Shell components and operations

⚡
20
Total Benchmarks
⏱️
4.20s
Total Execution Time
📈
210ms
Average per Benchmark

📋 Latest Benchmark Report

🚀 Rush Shell Performance Benchmark Report

📈 Performance Analysis

Performance Distribution

Fast (<10ms) 5 benchmarks
Medium (10-100ms) 2 benchmarks
Slow (≥100ms) 13 benchmarks

Fastest Components

Lexer (Basic Tokens) 379,929 ops/sec
Parser (Basic Commands) 100,020 ops/sec
Case Statements 1,787 ops/sec

Optimization Opportunities

External Commands 155 ops/sec
Complex Pipelines 168 ops/sec
Script Execution 231 ops/sec

🧪 Benchmark Categories

📝

Lexer Benchmarks

Tokenization performance for various command types

  • Basic tokenization (simple commands)
  • Complex tokenization (quotes, variables, expansions)
  • Large input tokenization
🔍

Parser Benchmarks

AST construction speed for complex structures

  • Basic command parsing
  • Complex structure parsing (if/for/while/case)
  • Function definition parsing
⚡

Executor Benchmarks

Command execution performance

  • Built-in command execution
  • External command execution
  • Variable operations
🔄

Expansion Benchmarks

Variable and arithmetic expansion speed

  • Variable expansion performance
  • Arithmetic expansion performance
  • Command substitution performance
๐Ÿ—๏ธ

Control Structure Benchmarks

If/for/while/case statement performance

  • If statement execution
  • Loop execution (for/while)
  • Case statement execution
🔗

Pipeline Benchmarks

Pipe and redirection performance

  • Simple pipeline execution
  • Complex pipeline execution
  • Pipeline with redirections

🎯 Running Benchmarks

Complete Benchmark Suite

# Run from benchmarks directory
cd benchmarks
mkdir -p target
cargo run --bin rush-benchmark

# Copy report to docs
cp target/benchmark_report.html ../docs/benchmark_report.html

Executes 20+ benchmark scenarios covering all major components

View HTML Report

# Serve locally from benchmarks directory
cd benchmarks
python3 -m http.server 8000 -d target/

# Visit http://localhost:8000/benchmark_report.html

Visual report with detailed metrics and performance analysis

JSON Results

# Machine-readable results from benchmarks directory
cat benchmarks/target/benchmark_results.json

Structured data for CI/CD integration and trend analysis
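As an illustration of consuming the JSON export, here is a minimal Python sketch that surfaces the lowest-throughput benchmarks. The record shape (a list of objects with `name` and `ops_per_sec` fields) is an assumption for illustration, not the actual schema of `benchmark_results.json`; the sample values mirror the report figures above.

```python
def slowest(results, n=3):
    """Return the n benchmarks with the lowest throughput (ops/sec)."""
    return sorted(results, key=lambda r: r["ops_per_sec"])[:n]

# Sample records mirroring the report above; the real file's schema may differ.
sample = [
    {"name": "Lexer (Basic Tokens)", "ops_per_sec": 379929},
    {"name": "External Commands", "ops_per_sec": 155},
    {"name": "Complex Pipelines", "ops_per_sec": 168},
    {"name": "Script Execution", "ops_per_sec": 231},
]

for r in slowest(sample):
    print(f'{r["name"]}: {r["ops_per_sec"]} ops/sec')
# External Commands: 155 ops/sec
# Complex Pipelines: 168 ops/sec
# Script Execution: 231 ops/sec
```

The same sort applied to the full results file yields an up-to-date optimization target list.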

📊 Performance Monitoring

🔍

Regression Detection

Track performance changes over time and identify regressions during development
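A regression check can be sketched as a threshold comparison between a stored baseline run and the current run. The dict shape (benchmark name mapped to ops/sec) and the 10% threshold are illustrative assumptions, not part of Rush's actual tooling:

```python
def regressions(baseline, current, threshold=0.10):
    """Flag benchmarks whose throughput dropped by more than `threshold`.

    Both arguments map benchmark name -> ops/sec. Returns a list of
    (name, baseline_ops, current_ops) tuples for flagged benchmarks.
    """
    flagged = []
    for name, base_ops in baseline.items():
        cur_ops = current.get(name)
        if cur_ops is not None and cur_ops < base_ops * (1 - threshold):
            flagged.append((name, base_ops, cur_ops))
    return flagged

# Example: a 35% drop in external-command throughput is flagged,
# while a ~1% drop in script execution stays within the threshold.
base = {"External Commands": 155, "Script Execution": 231}
cur = {"External Commands": 100, "Script Execution": 229}
print(regressions(base, cur))  # [('External Commands', 155, 100)]
```

Wired into CI against the JSON export, a non-empty return value can fail the build.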

⚡

Optimization Validation

Verify performance improvements and ensure no regressions in execution speed

🔄

CI/CD Integration

Automated performance testing in build pipelines for continuous monitoring

📈

Historical Tracking

JSON export enables trend analysis and long-term performance monitoring
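One way to accumulate that history is to append each run to a JSON-lines log and read per-benchmark series back out for trend plots. These helpers are a hypothetical sketch (the name-to-ops/sec record shape is assumed), not part of Rush's tooling:

```python
import json
import time

def append_run(log_path, results):
    """Append one timestamped run (name -> ops/sec) to a JSON-lines log."""
    record = {"timestamp": int(time.time()), "results": results}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

def series(log_path, name):
    """Return (timestamp, ops/sec) pairs for one benchmark across all runs."""
    points = []
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            if name in rec["results"]:
                points.append((rec["timestamp"], rec["results"][name]))
    return points
```

Each benchmark run appends one line, so the log grows monotonically and each series can be fed straight into a plotting tool.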

💡 Recommendations

⚠️

Performance Optimization

13 of the 20 benchmarks fall in the slow (≥100ms) band. Prioritize the slowest components identified in the benchmark report: external command execution, complex pipelines, and script execution.

💡

Regular Monitoring

Run benchmarks regularly to track performance trends over time and catch regressions early.

🔍

Focus Areas

Focus optimization efforts on the slowest benchmarks identified in the detailed results above.