It would be great if questionary would have an easy way to test input flows
For example (reusing my example flow from #34):
```python
from questionary import Separator, test_prompt

questions = [
    {
        "type": "confirm",
        "name": "conditional_step",
        "message": "Would you like the next question?",
        "default": True,
    },
    {
        "type": "text",
        "name": "next_question",
        "message": "Name this library?",
        "when": lambda x: x["conditional_step"],
        "validate": lambda val: val == "questionary",
    },
    {
        "type": "select",
        "name": "second_question",
        "message": "Select item",
        "choices": [
            "item1",
            "item2",
            Separator(),
            "other",
        ],
    },
    {
        "type": "text",
        "name": "second_question",
        "message": "Insert free text",
        "when": lambda x: x["second_question"] == "other",
    },
]

inputs = ["Yes", "questionary", "other", "free text something"]
vals = test_prompt(questions, inputs)
assert vals["second_question"] == "free text something"
```
. . .
By calling `test_prompt()` with an input string (or a list, which is probably easier to assemble), we could run through the whole workflow and verify that all keys are populated as expected.
This would allow proper CI for more complex flows that build one answer on top of another, as in the question flow above.
I suspect this is already possible by mocking some internals of questionary, but I see that as a fragile approach: every minor change to the mocked functions would likely break my tests.
Most of the needed code and logic should already exist as part of questionary's own test suite; however, that isn't shipped when installing from PyPI...