
Conversation

@stephencox
Owner

No description provided.

google-labs-jules bot and others added 3 commits May 20, 2025 07:17
This commit introduces several changes to improve performance and code quality:

1.  **CMakeLists.txt:**
    *   Changed compiler optimization level from -O2 to -O3.
    *   Added -Wall, -Wextra, and -pedantic flags to enable more warnings and enforce stricter C standards.

2.  **Tensor Operations (src/pearl_tensor.c):**
    *   Optimized `pearl_tensor_copy` by (see the first sketch after this commit message):
        *   Consolidating the calculation of total elements.
        *   Using `memcpy` for data copying, which can be more efficient than an element-by-element loop.
        *   Including `<string.h>` for `memcpy`.

3.  **Layer Updates (src/pearl_layer.c):**
    *   Parallelized the weight and bias update loop in `pearl_layer_update_fully_connected` using OpenMP (`#pragma omp parallel for`); a sketch of the pattern follows this commit message. This can speed up the training process for layers with a significant number of parameters.

These changes should provide a general uplift in performance, particularly during the training phase of neural networks, and improve code robustness through stricter compiler checks.
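
The tensor-copy change boils down to computing the element count once and handing the whole buffer to `memcpy`. Below is a minimal sketch of that pattern; the struct layout and field names (`dimension`, `size`, `data`) are assumptions for illustration and may not match the actual `pearl_tensor` definition.

```c
#include <string.h>  /* memcpy */

/* Hypothetical tensor layout, assumed for this sketch only. */
typedef struct {
    unsigned int dimension;  /* number of dimensions */
    unsigned int *size;      /* extent of each dimension */
    double *data;            /* contiguous element storage */
} tensor_sketch;

void tensor_copy_sketch(const tensor_sketch *src, tensor_sketch *dst)
{
    /* Consolidate the total-element calculation into a single loop... */
    unsigned int num_elements = 1;
    for (unsigned int i = 0; i < src->dimension; i++) {
        num_elements *= src->size[i];
    }
    /* ...then copy the whole buffer in one call instead of copying
       element by element. */
    memcpy(dst->data, src->data, num_elements * sizeof(double));
}
```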
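
The OpenMP change parallelizes an update loop in which each iteration writes a distinct parameter, so there are no dependencies between iterations. A minimal sketch of that pattern, using a flat parameter array and a plain gradient-descent step rather than the real layer structures (all names here are illustrative):

```c
/* Build with OpenMP enabled, e.g. -fopenmp with GCC/Clang; without it the
   pragma is ignored and the loop runs serially. */

/* Gradient-descent update over a flat parameter array; the real
   pearl_layer_update_fully_connected works on the layer's weight and
   bias tensors, but the parallelization pattern is the same. */
void update_parameters_sketch(double *params, const double *grads,
                              int n, double learning_rate)
{
    /* Each iteration touches a distinct element, so the loop can be
       split across threads safely. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        params[i] -= learning_rate * grads[i];
    }
}
```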
This commit introduces a functional dropout layer to the Pearl_NN library.

Key changes include:

1.  **Dropout Layer Implementation (src/pearl_layer.c, src/pearl_layer.h):**
    *   Added `pearl_layer_forward_dropout` to handle the forward pass (see the dropout sketch after this commit message).
        *   In training mode, it randomly zeros out neurons based on the dropout rate and scales the remaining activations.
        *   The dropout mask is dynamically created to match input tensor dimensions.
        *   In inference mode, it acts as a pass-through layer.
    *   Added `pearl_layer_backward_dropout` to handle the backward pass, applying the same mask and scaling to gradients.
    *   Integrated these functions into the main `pearl_layer_forward` and `pearl_layer_backward` routines.
    *   Modified `pearl_layer_create_dropout` to initialize the mask tensor (`weights`) to NULL, allowing dynamic creation in the forward pass.

2.  **Training/Inference Mode Control (src/pearl_network.h, src/pearl_network.c, src/pearl_layer.h, src/pearl_layer.c):**
    *   Added a `bool is_training` field to the `pearl_network` struct.
    *   This flag is set by `pearl_network_train_epoch` (true) and `pearl_network_calculate` (false).
    *   The `is_training` flag is propagated to `pearl_layer_forward` and used by the dropout layer to switch its behavior (see the flag-propagation sketch below).

3.  **Unit Tests (test/main.c):**
    *   Added `test_dropout_layer_forward_training_mode` to verify correct masking and scaling during training.
    *   Added `test_dropout_layer_forward_inference_mode` to verify pass-through behavior during inference.
    *   Added `test_dropout_layer_backward_pass` to verify correct gradient propagation.
    *   Updated `test_network_add_layers` to reflect changes in dropout layer initialization.
    *   Seeded `rand()` in `setUp()` for test randomness (see the test sketch below).

4.  **Bug Fix (src/pearl_network.c):**
    *   Resolved a potential double-free issue in `pearl_network_destroy` by removing a redundant `free` call for the input layer (an illustrative sketch of the pattern follows below).

The dropout layer provides a common regularization technique to prevent overfitting in neural networks. These changes enhance the library's capabilities and robustness.
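
The dropout behavior described in item 1 is the standard "inverted dropout" scheme: during training each activation is zeroed with probability equal to the dropout rate and survivors are scaled by 1/(1 - rate), while at inference the layer is a pass-through. The sketch below captures that logic on flat buffers; the function names, signatures, and use of `rand()` are assumptions for illustration, not the actual Pearl_NN API.

```c
#include <stdbool.h>
#include <stdlib.h>  /* rand, RAND_MAX */

/* Forward pass: in training mode, fill `mask` and apply it in place;
   in inference mode, leave the activations untouched. */
void dropout_forward_sketch(double *activations, double *mask, int n,
                            double rate, bool is_training)
{
    if (!is_training) {
        return;  /* inference: pass-through */
    }
    double scale = 1.0 / (1.0 - rate);
    for (int i = 0; i < n; i++) {
        /* Keep each neuron with probability (1 - rate); survivors are
           scaled so the expected activation is unchanged. */
        mask[i] = ((double)rand() / RAND_MAX) < rate ? 0.0 : scale;
        activations[i] *= mask[i];
    }
}

/* Backward pass: apply the same mask (and hence the same scaling)
   to the incoming gradients. */
void dropout_backward_sketch(double *gradients, const double *mask, int n)
{
    for (int i = 0; i < n; i++) {
        gradients[i] *= mask[i];
    }
}
```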
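
Item 2 wires a single `is_training` flag from the network down to the layers. A compressed sketch of that flow, with struct members and signatures assumed for illustration:

```c
#include <stdbool.h>

/* Assumed, simplified network struct; the real pearl_network also holds
   layers, hyperparameters, etc. */
typedef struct {
    bool is_training;
    /* ... layers, learning rate, ... */
} network_sketch;

void train_epoch_sketch(network_sketch *net)
{
    net->is_training = true;   /* dropout layers mask and scale */
    /* forward pass, backward pass, parameter updates ... */
}

void calculate_sketch(network_sketch *net)
{
    net->is_training = false;  /* dropout layers act as a pass-through */
    /* forward pass only ... */
}
```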
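
A sketch of what the inference-mode test might look like. The `setUp()`/`test_*` naming suggests the Unity test framework, but that, the `TEST_ASSERT_EQUAL_DOUBLE` macro, and the reuse of the `dropout_forward_sketch` helper from the sketch above are all assumptions:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <time.h>
#include "unity.h"  /* assumed test framework */

/* Defined in the dropout sketch above. */
void dropout_forward_sketch(double *activations, double *mask, int n,
                            double rate, bool is_training);

void setUp(void)
{
    srand((unsigned int)time(NULL));  /* seed rand() before each test */
}

void tearDown(void) {}

void test_dropout_layer_forward_inference_mode(void)
{
    double activations[3] = {0.5, -1.25, 2.0};
    double mask[3] = {0};

    /* is_training == false: the layer must leave activations untouched. */
    dropout_forward_sketch(activations, mask, 3, 0.5, false);

    TEST_ASSERT_EQUAL_DOUBLE(0.5,   activations[0]);
    TEST_ASSERT_EQUAL_DOUBLE(-1.25, activations[1]);
    TEST_ASSERT_EQUAL_DOUBLE(2.0,   activations[2]);
}
```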
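
The double-free fixed in item 4 has a familiar shape: the input layer is released inside the loop over all layers and was then freed a second time afterwards. An illustrative reconstruction, not the actual `pearl_network_destroy` code:

```c
#include <stdlib.h>

/* Toy structures, assumed for illustration only. */
typedef struct { double *weights; } layer_sketch;
typedef struct {
    layer_sketch **layers;
    unsigned int num_layers;
} network_destroy_sketch_t;

void network_destroy_sketch(network_destroy_sketch_t *net)
{
    for (unsigned int i = 0; i < net->num_layers; i++) {
        free(net->layers[i]);  /* frees every layer, including the input layer */
    }
    /* free(net->layers[0]);     redundant second free of the input layer,
                                 removed by this commit */
    free(net->layers);
    free(net);
}
```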
stephencox and others added 2 commits May 24, 2025 19:21
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
@sonarqubecloud

Quality Gate failed

Failed conditions
1 Security Hotspot
Reliability Rating on New Code: C (required ≥ A)

See analysis details on SonarQube Cloud
