# EnigmaMachineCore 0.1.0

A modular Enigma Machine simulation in C++20.
EnigmaMachineCore uses the Google Benchmark framework to provide high-resolution timing and throughput metrics for its cryptographic engine.
The benchmarking suite measures per-component latency, end-to-end encryption throughput, and memory behavior (heap allocations and stack depth) of the engine's hot paths.
Benchmarking requires Google Benchmark. The build system automatically downloads and configures it via CMake's FetchContent when enabled.
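The FetchContent wiring typically looks something like the following sketch — the actual declaration lives in the project's CMake files, and the pinned tag and target names may differ:

```cmake
include(FetchContent)

FetchContent_Declare(
  googlebenchmark
  GIT_REPOSITORY https://github.com/google/benchmark.git
  GIT_TAG        v1.8.3  # illustrative; the project may pin a different version
)

# Skip Google Benchmark's own unit tests to keep the fetch lean.
set(BENCHMARK_ENABLE_TESTING OFF CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googlebenchmark)

# Link the benchmark executable against the imported target.
target_link_libraries(EnigmaBenchmark PRIVATE benchmark::benchmark)
```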
By default, benchmarks are disabled to keep the core build fast. You must explicitly enable them:
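For example (a sketch — the exact option name is an assumption; check the top-level `CMakeLists.txt` for the actual flag):

```shell
# Configure a Release build with benchmarks turned on.
# NOTE: ENABLE_BENCHMARKS is an illustrative option name.
cmake -B build -DCMAKE_BUILD_TYPE=Release -DENABLE_BENCHMARKS=ON
cmake --build build
```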
Note: It is highly recommended to run benchmarks in Release mode for accurate results.
To ensure the executable can find the necessary configuration files, you should run it from the benchmarks/ directory within your build folder:
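Assuming a build directory named `build` (adjust to your own layout):

```shell
# Run from benchmarks/ so relative config paths resolve correctly.
cd build/benchmarks
./EnigmaBenchmark
```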
The benchmark output reports wall-clock time, CPU time, and iteration counts for each benchmark. For KeyTransform benchmarks, the reported rate represents characters processed per second (e.g., 10.5M/s).

Google Benchmark supports several useful flags:

```bash
# Run only the rotor benchmarks
./EnigmaBenchmark --benchmark_filter=BM_Rotor

# Emit results as JSON
./EnigmaBenchmark --benchmark_format=json > results.json

# List all registered benchmarks without running them
./EnigmaBenchmark --benchmark_list_tests
```

The following reference metrics were established for the v1.0 release to guide optimization and prevent regressions:
| Metric | Reference Value (v1.0) | Description |
|---|---|---|
| Initialization Peak Heap | ~27.7 KB | Total heap memory used during configuration loading and machine setup. |
| Hot-Path Allocations | 1 per character | Number of dynamic memory allocations during keyTransform. |
| Peak Stack Depth | ~450 Bytes | Maximum stack space used by the transformation call chain. |
| Throughput (3 Rotors) | ~7.7 MiB/s | Average encryption speed for 128 KB messages (Release Build). |
| Rotor Transform Latency | ~5.5 ns | Average time to process a character through a single rotor. |
| Rotor Rotate Latency | ~3.6 ns | Overhead of the mechanical stepping logic per rotor. |
Two official baselines are maintained to ensure accuracy across different execution contexts:
- **Local Baseline** (`docs/benchmarks/baseline_v1.0.json`)
- **CI Baseline** (`docs/benchmarks/baseline_ci_v1.0.json`)

Benchmarks are automatically executed on every Pull Request via GitHub Actions. The current performance is compared against the CI Baseline (`baseline_ci_v1.0.json`), and results are uploaded as artifacts for manual review if the automated check fails.
When optimizing the "hot path" of the engine (e.g., Rotor::transform), avoid comparing your local results directly against the repository's baseline files, as differences in hardware (CPU frequency, cache, etc.) will produce misleading results.
Instead, follow this "A/B" workflow on your local machine:
1. Check out the `main` branch, build in Release, and save results:
   ```bash
   ./EnigmaBenchmark --benchmark_out=base.json --benchmark_out_format=json
   ```
2. Check out your branch, rebuild in Release, and save its results:
   ```bash
   ./EnigmaBenchmark --benchmark_out=new.json --benchmark_out_format=json
   ```
3. Compare the two runs:
   ```bash
   python3 scripts/compare_benchmarks.py base.json new.json
   ```

The CI environment enforces a 5% regression threshold against the CI Baseline.
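The repository's `scripts/compare_benchmarks.py` is the canonical comparison tool; its exact implementation isn't reproduced here, but the core idea — pairing benchmarks by name and flagging any that slowed down beyond the 5% threshold — can be sketched as follows (field names assume Google Benchmark's JSON report format):

```python
import json

# 5% regression threshold, matching the limit enforced in CI.
THRESHOLD = 0.05

def load_times(path):
    """Map benchmark name -> real_time from a Google Benchmark JSON report."""
    with open(path) as f:
        report = json.load(f)
    return {b["name"]: b["real_time"] for b in report["benchmarks"]}

def find_regressions(base, new):
    """Return {name: fractional slowdown} for benchmarks exceeding THRESHOLD."""
    return {
        name: new[name] / base_time - 1.0
        for name, base_time in base.items()
        if name in new and new[name] > base_time * (1.0 + THRESHOLD)
    }

# Usage sketch:
#   regs = find_regressions(load_times("base.json"), load_times("new.json"))
#   for name, slowdown in regs.items():
#       print(f"REGRESSION {name}: +{slowdown:.1%}")
```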
EnigmaMachineCore uses gcov and lcov to provide code coverage metrics. This helps identify untested code paths and ensures high quality for the cryptographic engine.
| Metric | Target | Description |
|---|---|---|
| Overall Coverage | 75% | All library code (core + utilities) |
| Core Crypto Coverage | 95% | Rotor, Reflector, Transformer, PlugBoard, EnigmaMachine |
The following files are considered "core crypto" for coverage analysis:
- `RotorBox/src/Rotor.cpp`
- `RotorBox/src/Reflector.cpp`
- `RotorBox/src/Transformer.cpp`
- `RotorBox/src/RotorBox.cpp`
- `PlugBoard/src/PlugBoard.cpp`
- `EnigmaMachine/src/EnigmaMachine.cpp`

After generating the reports, the coverage output shows line and function coverage for each of these files, making untested code paths easy to spot.
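A typical local workflow for generating the reports might look like this sketch — the build-directory name and coverage flags are assumptions, while the `lcov`/`genhtml` invocations are those tools' standard usage:

```shell
# Configure an instrumented build (--coverage enables gcov instrumentation).
cmake -B build-cov -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS="--coverage"
cmake --build build-cov

# Run the test suite to produce .gcda execution counts.
ctest --test-dir build-cov

# Capture the counts and render an HTML report.
lcov --capture --directory build-cov --output-file coverage.info
genhtml coverage.info --output-directory coverage-html
```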
Code coverage is automatically run on Pull Requests via GitHub Actions; the workflow uses a 4-job pipeline.
Coverage reports are generated on every run and checked against the thresholds: 75% (full library) and 95% (core crypto). Falling below a threshold triggers a warning but does not fail the build.
See .github/workflows/test-and-coverage.yml for details.