
Conversation

@google-labs-jules (Contributor)

This change updates the project's documentation to be more user-friendly for new users. The main README.md has been updated with installation instructions and a clearer "Getting Started" section. The "Getting Started" tutorial in the documentation site has been restructured to clearly explain and provide examples for the three main use cases: single model evaluation, model comparison, and population comparison, mirroring the structure of the R package's documentation.


PR created automatically by Jules for task 17743907015561309458 started by @uriahf

Updates the project's documentation to be more user-friendly for new users.

- The main `README.md` has been updated with installation instructions, a clearer "Getting Started" section, and links to the blog and official documentation. The code example has been corrected to use the proper dictionary format for the `reals` parameter.
- The "Getting Started" tutorial (`docs/tutorials/getting_started.qmd`) has been restructured to clearly explain and provide examples for the three main use cases: single model evaluation, model comparison, and population comparison. This new structure is inspired by the documentation for the R version of `rtichoke`.
@google-labs-jules (Contributor, Author)

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!


For security, I will only act on instructions from the user who triggered this task.

New to Jules? Learn more at jules.google/docs.

Updates the project's documentation to be more user-friendly for new users.

- The main `README.md` has been updated with installation instructions, a clearer "Getting Started" section, and links to the blog and official documentation.
- The "Getting Started" tutorial (`docs/tutorials/getting_started.qmd`) has been restructured to clearly explain and provide examples for the three main use cases: single model evaluation, model comparison, and population comparison.
- All code examples in both the README and the tutorial now use more realistic and intuitive sample data where model predictions are clearly correlated with outcomes, making the visualizations more meaningful, as illustrated in the sketch after this list.
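One way to make "clearly correlated with outcomes" concrete is to check that a hardcoded example scores well above chance. The data below is a hypothetical stand-in for the documentation examples, not the actual values from the docs, and scikit-learn is used here only for the check.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hardcoded example scores: higher scores mostly go with positive outcomes.
probs = np.array([0.1, 0.2, 0.3, 0.35, 0.6, 0.65, 0.8, 0.9])
reals = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# An AUC well above 0.5 shows the example model is informative rather than random.
print(roc_auc_score(reals, probs))  # 0.9375
```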
@uriahf (Owner) commented Dec 24, 2025

the good model doesn't make any sense, it's random.

@google-labs-jules (Contributor, Author)

> the good model doesn't make any sense, it's random.

Thank you for the feedback. You are correct, the initial examples were not clear. I have updated the code in my latest push to use more realistic and intuitive data for all the examples in both the README and the tutorial, where the 'Good Model' now clearly performs better than random.

Refines the code examples in the README and "Getting Started" tutorial to use more realistic and intuitive sample data. This addresses feedback that the previous "good model" was indistinguishable from a random one. The new examples now clearly demonstrate a model with predictive power.
uriahf marked this pull request as ready for review on December 24, 2025 at 17:26
@uriahf (Owner) commented Dec 24, 2025

That's a better example, don't use "good" "bad" jargon

Sample data from the dcurves_example.py script

```python
import numpy as np

probs_dict = {
    "Marker": np.array([
        0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.1, 0.2, 0.3, 0.4, 0.5,
        0.6, 0.7, 0.8, 0.9, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9
    ]),
    "Marker2": np.array([
        0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.1, 0.2, 0.3, 0.4, 0.5,
        0.6, 0.7, 0.8, 0.9, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9
    ])
}
reals = np.array([
    1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1
])
```
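For reference, a quick numpy-only check of this sample (reconstructed compactly below, since both markers and the outcomes repeat the same nine-value pattern three times) shows the scores separate the two outcome classes rather than behaving randomly.

```python
import numpy as np

# Compact reconstruction of the sample above: the nine-value pattern tiled three times.
marker = np.tile(np.arange(1, 10) / 10, 3)        # 0.1 ... 0.9, repeated; 27 values
reals = np.tile([1, 0, 0, 0, 0, 1, 1, 1, 1], 3)   # 15 positives, 12 negatives

print(reals.mean().round(2))                # 0.56 outcome prevalence
print(marker[reals == 1].mean().round(2))   # 0.62 mean score among positives
print(marker[reals == 0].mean().round(2))   # 0.35 mean score among negatives
```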

@google-labs-jules (Contributor, Author)

> That's a better example, don't use "good" "bad" jargon
>
> Sample data from the dcurves_example.py script

That's a great suggestion. I've updated the documentation to remove the 'good'/'bad' jargon and replaced it with neutral names like 'Model A' and 'Model B'. Thank you for the feedback!

Removes subjective jargon like "good" and "bad" from the model names in the documentation. The examples now use neutral, descriptive names like "Model A" and "Model B" for clarity and professionalism.
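Concretely, the renaming amounts to swapping the dictionary keys used in the examples. A minimal sketch with placeholder values (not the actual data from the docs):

```python
import numpy as np

# Neutral model names instead of "Good Model" / "Bad Model" as dictionary keys.
probs = {
    "Model A": np.array([0.2, 0.3, 0.6, 0.8, 0.9]),
    "Model B": np.array([0.4, 0.5, 0.5, 0.6, 0.7]),
}
```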
@uriahf (Owner) commented Dec 24, 2025

Don't use np.random.rand at all

@google-labs-jules (Contributor, Author)

> Don't use np.random.rand at all

Excellent point. I've removed all calls to np.random.rand and replaced them with small, hardcoded datasets. This makes the examples fully reproducible and easier to understand. Thanks for the guidance!

Replaces all calls to `np.random.rand` in the documentation with small, hardcoded datasets. This makes all code examples fully reproducible, deterministic, and easier for new users to understand at a glance.
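As an illustration of that change, the pattern is roughly the following; the variable names and values here are placeholders rather than the actual snippets from the docs.

```python
import numpy as np

# Before: non-reproducible scores that change on every render of the docs.
# probs = {"model": np.random.rand(6)}

# After: a small hardcoded dataset that reads the same on every run.
probs = {"model": np.array([0.15, 0.25, 0.4, 0.55, 0.7, 0.85])}
reals = np.array([0, 0, 0, 1, 1, 1])
```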
uriahf merged commit 725e7d7 into main on Dec 24, 2025 (1 check passed)