Is your feature request related to a problem? Please describe.
I am running 2nd-order leakage analysis with PROLEAD and hitting very high memory usage, to the point where the operating system kills the process, even for relatively small designs.
Concretely:
- For a simple DOM-masked design with an 8-bit output, a 2nd-order analysis over 10 cycles and 128,000 simulations already causes the memory usage to grow beyond 64 GB, and the process gets killed by the OS.
- For an S-box implementation using a 256-bit key, the memory usage of 2nd-order analysis becomes even more extreme, which makes it practically impossible to run with a sufficiently large number of simulations to observe subtle/cumulative leakage effects.
I would like to confirm whether this behavior is expected for 2nd-order analysis in PROLEAD, and whether there is a recommended workflow to deal with such memory requirements in a more systematic way.
Describe the solution you'd like
My main goal is not to minimize memory usage at all costs, but rather to have clear guidance and examples so that I can fully and efficiently use the available memory (e.g., 64 GB or more) to perform meaningful higher-order evaluations.
Concretely, it would be very helpful to have:
- **High-order example configurations**
  - Official example configurations (JSON/settings) for 2nd-order (and possibly higher-order) analysis on:
    - a small DOM-masked design (e.g., 8-bit output),
    - a larger design such as an S-box with a 256-bit key.
  - These examples could illustrate “typical” parameter choices for:
    - number of probes,
    - number of cycles,
    - number of simulations/traces,
    - use of compact mode and other relevant options.
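To make the first point concrete, below is a rough sketch of the kind of annotated example I am hoping for. The key names are only placeholders based on my guesses and very likely do not match PROLEAD's actual configuration schema; an official, correct version of such a file is exactly what I am asking for:

```json
{
  "order_of_test": 2,
  "transitional_leakage": false,
  "compact_mode": true,
  "no_of_test_clock_cycles": 10,
  "no_of_simulations": 128000,
  "no_of_step_simulations": 128000
}
```

Even one officially maintained file like this per scenario, with short notes on which parameters dominate memory consumption, would answer most of my questions.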
- **Recommended configuration guidelines to exploit available memory**
  - Rules of thumb or tables like:
    - “With 32 GB / 64 GB / 128 GB RAM and a design of size X, a practical 2nd-order configuration could be: Y probes, Z cycles, N simulations.”
  - Guidance on how to scale the parameters when increasing RAM:
    - If I double the RAM, should I primarily increase the number of simulations, the number of probes, or the number of cycles to get the most informative higher-order evaluation?
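To illustrate the second point, here is the kind of back-of-envelope helper I currently improvise myself. The memory model behind it (one probing set per combination of standard probes, with a roughly constant per-set, per-simulation cost measured in a small pilot run) is purely my assumption about how PROLEAD scales, not its documented behavior:

```python
from math import comb

def probing_sets(probes_per_cycle: int, cycles: int, order: int) -> int:
    """Probing sets under my assumed model: every combination of
    `order` standard probes (one probe at one clock cycle)."""
    return comb(probes_per_cycle * cycles, order)

def max_simulations(ram_gib: float, sets: int, bytes_per_set_per_sim: float) -> int:
    """Solve for a simulation count that fits the RAM budget, given a
    per-set, per-simulation cost measured empirically in a pilot run."""
    return int(ram_gib * 2**30 / (sets * bytes_per_set_per_sim))

# Example: 8 probes over 10 cycles at 2nd order -> comb(80, 2) = 3160 sets.
sets = probing_sets(8, 10, 2)
# With a hypothetical measured cost of 4 bytes/set/simulation and 64 GiB:
print(sets, max_simulations(64, sets, 4.0))
```

Official guidance on what the real cost model looks like would let users replace my guesses with reliable numbers.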
- **(Optional) Documentation on expected complexity**
  - A short explanation of how memory usage grows with:
    - order of analysis,
    - number of probes,
    - number of cycles,
    - number of simulations.
  - This would help users plan experiments that push their hardware reasonably close to its limits while staying stable.
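For the third point, even a statement as simple as the combinatorial sketch below would help with planning. Again, the underlying model (probing sets as combinations of standard probes) is my assumption, not a documented formula:

```python
from math import comb

standard_probes = 8 * 10  # e.g. 8 probes observed over 10 clock cycles
for order in (1, 2, 3):
    print(order, comb(standard_probes, order))
# prints 80, 3160, 82160: each extra order multiplies the set count
# (and hence memory) by about (n - d) / (d + 1), where n is the number
# of standard probes and d the current order
```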
If such examples and guidelines already exist (e.g., in previous versions, papers, or internal documentation), making them more visible in the current documentation would already be a very valuable solution.
Describe alternatives you've considered
So far, to keep memory usage manageable, I have tried the following approaches:
- Using compact mode for 2nd-order analysis whenever possible.
- Reducing the number of probes and the number of cycles included in the analysis.
- Reducing the number of simulations (traces) per run and considering splitting the evaluation into multiple smaller experiments.
However, all of these alternatives come with non-trivial trade-offs:
- Reducing probes/cycles may miss relevant leakage locations or time points.
- Reducing the number of simulations may make it difficult to observe cumulative/weak 2nd-order leakage effects.
- Manually splitting experiments and combining results is possible in principle, but it is not obvious how to do this in a statistically sound and tool-supported way (see the sketch after this list for the kind of recipe I have in mind).
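To illustrate the last alternative: what I have in mind is something like combining per-probing-set p-values from several independent smaller runs with Fisher's method. This is a generic statistical recipe, not a documented PROLEAD feature, and I am not sure it is sound for PROLEAD's test statistics, which is exactly why tool-supported guidance would help:

```python
import numpy as np
from scipy import stats

def fisher_combine(p_values):
    """Fisher's method: under the null hypothesis (no leakage),
    -2 * sum(log p_i) is chi-squared distributed with 2k degrees
    of freedom for k independent runs."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.log(p).sum()
    return stats.chi2.sf(statistic, df=2 * len(p))

# e.g. p-values for one probing set from four independent 32k-trace runs:
print(fisher_combine([0.20, 0.05, 0.11, 0.30]))
```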
This is why I am specifically looking for recommended high-order configurations and examples that show how PROLEAD is intended to be used in practice when higher memory budgets are available.
Additional context
- Example setup where I observe the issue:
  - DOM-masked design with an 8-bit output;
  - 2nd-order analysis over 10 cycles;
  - 128,000 simulations;
  - memory usage grows beyond 64 GB and the process is killed by the OS.
- For a 2nd-order analysis of an S-box implementation with a 256-bit key, the memory consumption is even higher, making it impractical to use a sufficiently large number of simulations.
My primary interest is to perform as strong and realistic a higher-order security evaluation as possible by using the available memory efficiently, rather than simply reducing memory usage.
In addition, I remember that there used to be examples or documentation related to 2nd-order analysis in earlier versions of PROLEAD or its accompanying materials, but I cannot find them in the current repository or documentation. Are there any up-to-date example configurations for 2nd-order analysis that you would recommend as a reference?
If it is helpful, I can provide more detailed configuration files or logs.
(For reference, my email is: yfm23@mails.tsinghua.edu.cn)