Max crawl depth not working #475

@sasa-andjelic-nqode

When I start a crawl without setting the max crawl depth, the crawler scans for links on all pages.
But when I set the max crawl depth to some integer value (for example 100), the crawler only scans the first layer of links and crawls those.
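
For context, this is roughly the kind of setup I mean (a minimal sketch, assuming this is spatie/crawler's `Crawler::create()` / `setMaximumDepth()` fluent API; `MyCrawlObserver` and the URL are just placeholders):

```php
use Spatie\Crawler\Crawler;

// With setMaximumDepth() set, only the first layer of links gets crawled;
// without it, the crawler follows links on every page as expected.
Crawler::create()
    ->setMaximumDepth(100)
    ->setCrawlObserver(new MyCrawlObserver()) // placeholder observer
    ->startCrawling('https://example.com');   // placeholder URL
```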

I took a peek at the Crawler->addToDepthTree() implementation and I see that if the max depth is null, any URL is instantiated as a node. The rest of the conditions inside the method do not cover the case of child links.
I also didn't notice any increment of a link's current depth, as the sketch below illustrates.
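
To show what I expected addToDepthTree() to do, here is a rough sketch (hypothetical code, not the library's actual implementation): the search for the parent URL would recurse through child nodes, and each child would get its parent's depth plus one.

```php
// Hypothetical depth tree sketch, not the library's code.
class Node
{
    /** @var Node[] */
    private array $children = [];

    public function __construct(
        private string $url,
        private int $depth = 0, // root = 0, each child = parent depth + 1
    ) {}

    public function getDepth(): int
    {
        return $this->depth;
    }

    public function addChild(string $url): Node
    {
        $child = new Node($url, $this->depth + 1); // depth is incremented here
        $this->children[] = $child;

        return $child;
    }

    // Recursively search the whole tree for the parent URL, so that links
    // found on child pages are attached too, not only links on the root page.
    public function addToDepthTree(string $url, string $parentUrl): ?Node
    {
        if ($this->url === $parentUrl) {
            return $this->addChild($url);
        }

        foreach ($this->children as $child) {
            if ($node = $child->addToDepthTree($url, $parentUrl)) {
                return $node;
            }
        }

        return null;
    }
}
```

Without that recursion and depth increment, links found below the first layer never get a node, which would explain the behaviour I'm seeing.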

It looks like max depth was never fully implemented. Is that the case, or am I perhaps missing something obvious that I should pay attention to?
