
Add performance benchmarks for large configurations #202

@jtdub

Description

The project lists pytest-profiling in its dev dependencies but has no dedicated benchmarks. For a library that processes network configurations, which can run to thousands of lines on large devices, having baseline performance data would be valuable.

Areas where performance characteristics are unknown or potentially interesting:

  • Parsing large configs (10,000+ lines) via _load_from_string_lines() vs get_hconfig_fast_load() (see the parsing sketch after this list)
  • config_to_get_to() on configs with many differences
  • all_children_sorted() on deeply nested hierarchies (repeated sorted() calls)
  • HConfigChildren.rebuild_mapping() cost on frequent deletions
  • Memory usage for large config trees (benefit of __slots__)
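For the parsing comparison in the first bullet, a minimal pytest-benchmark sketch could look like the one below. This is a sketch under assumptions, not a drop-in test: `get_hconfig`, `get_hconfig_fast_load`, and `Platform` are hier_config 3.x names taken from this issue and the library's public API, but their exact signatures should be verified against the installed version, and `make_large_config` is a hypothetical generator invented here for illustration.

```python
import pytest

# Assumed hier_config 3.x imports; verify names and signatures against the
# installed version before treating this as a real benchmark.
from hier_config import Platform, get_hconfig, get_hconfig_fast_load


def make_large_config(num_interfaces: int = 2500) -> str:
    """Hypothetical generator: a synthetic ~10,000-line IOS-style config."""
    blocks = []
    for i in range(num_interfaces):
        blocks.append(
            f"interface GigabitEthernet1/0/{i}\n"
            f"  description uplink-{i}\n"
            f"  switchport mode access\n"
            f"  switchport access vlan {100 + i % 400}"
        )
    return "\n".join(blocks)


@pytest.fixture(scope="module")
def large_config_text() -> str:
    return make_large_config()


def test_parse_standard(benchmark, large_config_text):
    # pytest-benchmark's `benchmark` fixture runs the callable repeatedly
    # and reports min/mean/stddev timings.
    benchmark(get_hconfig, Platform.CISCO_IOS, large_config_text)


def test_parse_fast_load(benchmark, large_config_text):
    # Assumes get_hconfig_fast_load accepts the same (platform, text)
    # arguments as get_hconfig; adjust if the real signature differs.
    benchmark(get_hconfig_fast_load, Platform.CISCO_IOS, large_config_text)
```

Running `pytest --benchmark-only` on a pair of tests like this would print a side-by-side timing table for the two load paths, which is exactly the kind of baseline data this issue asks for.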

Proposed Improvement

  1. Add a benchmarks/ directory or pytest benchmark fixtures
  2. Create representative large config samples (or generators)
  3. Measure parsing, diffing, and iteration performance (see the diffing and memory sketch after this list)
  4. Document expected performance characteristics in docs
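For step 3, diffing and memory can be sampled in the same style, reusing the `large_config_text` fixture from the earlier sketch. The sketch below assumes `config_to_get_to()` is invoked as `running.config_to_get_to(intended)` (the usual hier_config pattern) and uses the standard-library `tracemalloc` to approximate the peak allocation of building one tree, which is the number the `__slots__` question turns on. Treat both as starting points rather than the final harness.

```python
import tracemalloc

from hier_config import Platform, get_hconfig  # assumed 3.x imports


def test_diff_many_differences(benchmark, large_config_text):
    # A near-worst case for config_to_get_to(): an empty running config
    # (assuming the loader accepts an empty string) means every line of
    # the intended config is a difference.
    running = get_hconfig(Platform.CISCO_IOS, "")
    intended = get_hconfig(Platform.CISCO_IOS, large_config_text)
    benchmark(running.config_to_get_to, intended)


def peak_tree_memory(config_text: str) -> int:
    """Rough peak bytes allocated while building a single HConfig tree."""
    tracemalloc.start()
    get_hconfig(Platform.CISCO_IOS, config_text)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak
```

Comparing `peak_tree_memory()` before and after a change to the classes' `__slots__` usage would quantify the memory benefit mentioned above without needing an external profiler.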

Labels

enhancement (New feature or request)
