
Environment setup issues #4

@DrozdikGleb

Description


Hello, and thank you for the open dataset and the very useful paper.

I've encountered some issues while setting up the environment. In some projects, installing the dependencies from requirements is not enough, so some samples fail their tests because of an incorrectly configured environment.
For example, in the project nlm-ingestor it is necessary to download the NLTK dictionaries. In camp_zipnerf, the newer version of scipy.linalg no longer provides the tril function. In ollama, pytest_httpserver and pillow are missing. After fixing some of these issues, pass@1 for gpt-4 increased from 20.73% (as reported in your article) to 26.54%.
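Roughly, the fixes I applied look like the sketch below (the exact NLTK corpora, package names, and the SciPy version bound are my assumptions and may need adjusting per project):

```python
# Minimal sketch of the extra environment setup steps mentioned above.
import subprocess
import sys

import nltk


def pip_install(*packages):
    """Install packages into the current interpreter's environment."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])


# nlm-ingestor: NLTK data is not pulled in by requirements and must be
# downloaded separately (corpora names are an assumption).
nltk.download("punkt")
nltk.download("stopwords")

# camp_zipnerf: scipy.linalg.tril was removed in newer SciPy releases,
# so pin an older version (exact bound is an assumption).
pip_install("scipy<1.13")

# ollama: test dependencies missing from requirements.
pip_install("pytest_httpserver", "pillow")
```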

I also wanted to ask why EvoCodeBench does not consider app-specific imports, sibling files, and similar-name files, as was done in DevEval. It seems that at least app-specific imports should be useful.
