
Conversation

@smartinez-yatiribio

Motivation and Context

The dataProcess() function, when using numOfCores > 1, was copying the full input data to every worker process. That becomes a real problem when working with a very large input dataset.
In addition, the parallel processing loop was run over individual proteins instead of over chunks of data.
The proposed change, using the foreach and doParallel packages, splits the data into chunks and sends only one chunk to each worker.

Changes

Please provide a detailed bullet point list of your changes.

  • Add library(foreach) and library(doParallel).
  • Do not export 'input' to the cluster.
  • Use foreach::foreach and %dopar% to send one chunk of the input to each cluster worker (see the sketch below).
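
A minimal sketch of that chunked pattern, assuming numOfCores is the requested core count, input is a data.frame-like table with a PROTEIN column, and summarize_chunk() is a hypothetical per-chunk worker function (not the actual MSstats internals):

library(foreach)
library(doParallel)

cl <- parallel::makeCluster(numOfCores)
doParallel::registerDoParallel(cl)

## Assign every row of a protein to the same chunk, one chunk per worker,
## so each worker receives only its own slice rather than the full table.
chunk_id <- as.integer(factor(input$PROTEIN)) %% numOfCores
chunks <- split(input, chunk_id)

## Only `chunk` and summarize_chunk() are referenced in the loop body,
## so `input` itself is never exported to the cluster.
results <- foreach(chunk = chunks, .combine = rbind) %dopar% {
  summarize_chunk(chunk)
}

parallel::stopCluster(cl)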

Testing

Please describe any unit tests you added or modified to verify your changes.

Checklist Before Requesting a Review

  • I have read the MSstats contributing guidelines
  • My changes generate no new warnings
  • Any dependent changes have been merged and published in downstream modules
  • I have run the devtools::document() command after my changes and committed the added files

@coderabbitai

coderabbitai bot commented Sep 23, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.



@smartinez-yatiribio changed the title from Release 3 21 to Fixing memory problems for dataProcess using multicore on Sep 23, 2025
@smartinez-yatiribio
Author

Associated with issue #177

@tonywu1999
Contributor

tonywu1999 commented Sep 24, 2025

@smartinez-yatiribio thanks for putting up this PR; we've noticed these memory issues too.

I noticed a lot of the changes here are simply spacing/tabbing differences. I'm going to pull your changes locally, re-push only the files involved in parallel processing, and then review the changes after that.

I will also look into modifying the PR to point to the devel branch (since the release branch is for bug fixes, and I would consider this more of a feature request / feature upgrade).

@smartinez-yatiribio
Author

Ok, sounds good.
Regarding the spacing, I just followed the guidelines you have about it and ran the default styler::style(), but I saw that it changed everything.
I tried to do it on devel, but it wasn't working for me; I guess some other packages needed to be used on the devel branch too?

@smartinez-yatiribio
Author

I wanted to update this issue to say that the last commit I just made, a simpler approach using just data.table parallel processing, seems to work better.
The previous solution I proposed had too much memory overhead when splitting the input table into subtables (x200). I didn't notice that yesterday.
This new solution is simply using:
summarized_results <- input[, .(out = summarizer_fun(.SD, impute, censored_symbol, remove50missing)), by = PROTEIN]
where

summarizer_fun <- switch(method,
                         "TMP" = MSstatsSummarizeSingleTMP,
                         MSstatsSummarizeSingleLinear)

Please note: I had problems with linear summarization, but maybe that is because I wasn't using the latest version. I am confused about that.
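
If that grouped call is meant to lean on data.table's internal parallelism, the thread count is controlled separately; a small sketch (setDTthreads()/getDTthreads() are data.table's thread controls, and numOfCores stands for the dataProcess core-count argument):

library(data.table)
setDTthreads(numOfCores)        # cap data.table's internal (OpenMP) threads
getDTthreads(verbose = TRUE)    # report how many threads data.table will use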

@tonywu1999
Contributor

tonywu1999 commented Sep 29, 2025

@smartinez-yatiribio

I did some reading and I'm a little skeptical of the data.table approach based on this issue. data.table's parallelization seems to work only for simpler functions, not for custom functions (unless the custom function uses simple subfunctions like max, sum, etc.).
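
A rough illustration of that limitation (a sketch; the verbose output shows whether data.table's optimized "GForce" path, which is where its internal parallelism applies, is used for a grouped call):

library(data.table)
dt <- data.table(PROTEIN = rep(c("A", "B"), each = 4), y = rnorm(8))

## Built-in reductions such as max()/sum() are replaced by optimized GForce
## kernels, which the verbose output reports.
dt[, .(m = max(y)), by = PROTEIN, verbose = TRUE]

## An arbitrary user-defined function is evaluated group by group in plain R,
## with no GForce optimization.
custom_fun <- function(v) max(v) + 0
dt[, .(m = custom_fun(y)), by = PROTEIN, verbose = TRUE]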

Could you add information on the benchmarks you performed comparing the processing time and memory usage for the following? (One possible way to collect such numbers is sketched after the list.)

  • No parallel processing
  • Your data.table approach (let's say 2-4 cores for now)
  • Current approach with parLapply (similarly 2-4 cores)
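
A minimal harness along those lines (a sketch: msstats_input is a placeholder, the core-count argument is written as in this PR's description, and bench::mark only measures memory allocated in the main R process, not in worker processes):

library(bench)

res <- bench::mark(
  serial    = dataProcess(msstats_input, numOfCores = 1),
  parallel4 = dataProcess(msstats_input, numOfCores = 4),
  iterations = 1,
  check = FALSE   # skip comparing the returned objects across runs
)
res[, c("expression", "total_time", "mem_alloc")]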

Additional note: we have plans to migrate our parallel processing to use the matter package, which addresses the memory overhead issue alongside other problems.

@smartinez-yatiribio
Author

You know what? You are right. It only worked for me with a small dataset; I was really seeing all CPUs working at the same time. But when I tried with my real dataset, which has 30M rows... it doesn't work. I am still trying to figure this out, because I cannot process that amount of data, and it is not even as much as we will have in the future with an Astral.
So, please disregard my last commit. Sorry for that.
