39 commits
b7122ed
Merge pull request #5 from cypienta/main
ezztahoun Jul 19, 2024
41937d2
update lambda functions
Nidheesh-Panchal Jul 19, 2024
6f651fa
Merge remote-tracking branch 'origin/main' into v0.7_dev
Nidheesh-Panchal Jul 22, 2024
77f9619
add one liner, update headings
Nidheesh-Panchal Jul 22, 2024
f30096e
Adding in intomation about tiemout and the stack creation time
Kevin-Shi-Dev Jul 22, 2024
c250745
Adding handling multipule input files
Kevin-Shi-Dev Jul 22, 2024
4feb9ea
Adding common mistakes
Kevin-Shi-Dev Jul 22, 2024
1d990db
add images
Nidheesh-Panchal Jul 22, 2024
10e6c52
how to delete stack
Nidheesh-Panchal Jul 22, 2024
29c9b6b
manual retrigger
Nidheesh-Panchal Jul 22, 2024
63c51b9
manual retrigger
Nidheesh-Panchal Jul 22, 2024
84badcc
sorting index
Nidheesh-Panchal Jul 22, 2024
6c96662
update docs
Nidheesh-Panchal Jul 23, 2024
72cf87f
update docs
Nidheesh-Panchal Jul 23, 2024
772f570
update docs
Nidheesh-Panchal Jul 23, 2024
bcd4105
update docs
Nidheesh-Panchal Jul 23, 2024
4792034
update docs
Nidheesh-Panchal Jul 23, 2024
7cbdb61
update links
Nidheesh-Panchal Jul 24, 2024
de3c57c
Add note for cron schedule
Nidheesh-Panchal Jul 24, 2024
ae9d43a
add space
Nidheesh-Panchal Jul 24, 2024
edd6ccf
add file size
Nidheesh-Panchal Jul 24, 2024
a388138
update versions
Nidheesh-Panchal Jul 31, 2024
9dc73ab
update version
Nidheesh-Panchal Jul 31, 2024
cfbff5b
add step function to requirements
Nidheesh-Panchal Jul 31, 2024
f505c0b
CD-53: update
Nidheesh-Panchal Aug 6, 2024
8f4b662
CD-53 add flag for direct internal input
Nidheesh-Panchal Aug 6, 2024
104c130
CD-52: update UI version
Nidheesh-Panchal Aug 6, 2024
98c6192
CD-53: update
Nidheesh-Panchal Aug 6, 2024
a203df0
CD-53: update
Nidheesh-Panchal Aug 6, 2024
eb66491
CD-7: sample input for cef and non cef input format
Nidheesh-Panchal Aug 9, 2024
a1a667f
Revert "CD-7: sample input for cef and non cef input format"
Nidheesh-Panchal Aug 13, 2024
84f7241
Revert "CD-53: update"
Nidheesh-Panchal Aug 13, 2024
5fcc701
Revert "CD-53: update"
Nidheesh-Panchal Aug 13, 2024
a321be6
Revert "CD-52: update UI version"
Nidheesh-Panchal Aug 13, 2024
811c571
Revert "CD-53 add flag for direct internal input"
Nidheesh-Panchal Aug 13, 2024
734105c
Revert "CD-53: update"
Nidheesh-Panchal Aug 13, 2024
65d1ded
add JIRA integration details
Nidheesh-Panchal Aug 27, 2024
3a11371
CD-276: add default UI creds
Nidheesh-Panchal Sep 3, 2024
30a3eaa
CD-276: add default UI creds
Nidheesh-Panchal Sep 3, 2024
3 changes: 3 additions & 0 deletions docs/source/deploy_ui/start_using.rst
@@ -26,6 +26,9 @@ Start using Cypienta UI
.. image:: resources/home_page.png
:alt: Home page
:align: center

.. note::
The default ``Username`` is ``maestro`` and the default ``Password`` is ``changemenow``.


How to use the Hide feature for events in UI
5 changes: 5 additions & 0 deletions docs/source/elastic/elastic.rst
@@ -1,6 +1,11 @@
Configure Elastic
=================

Prerequisites
-------------

Make sure that you have deployed the Cypienta application as detailed in :doc:`../getting_started/deploy` before integrating.

Logstash pipeline from Elastic Search to AWS S3
-----------------------------------------------

85 changes: 72 additions & 13 deletions docs/source/getting_started/deploy.rst
@@ -1,17 +1,40 @@
AWS Deployment
==============

.. _setup_lambda_repository:
.. _setup_lambda_repository_single_command:

Setup Lambda repository
Setup Lambda repository - Single quick command
----------------------------------------------

1. Navigate to the AWS console, and select ``CloudShell`` at the bottom left of the console. Open the cloud shell in the region you want to deploy.

2. The following command downloads a shell script that sets up the Lambda repository and outputs an ECR Image URI. Make note of the ECR Image URI to use in the CloudFormation template.

.. code-block:: shell

$ wget https://github.com/cypienta/AWS/raw/v0.7/vrl-lambda.sh && sh vrl-lambda.sh

.. note::
Move to the section :ref:`setup_lambda_repository` for detailed manual steps.

3. Once you have made a note of the ECR image URI for the VRL lambda, move to the section :ref:`deploy_cloud_formation`.

.. _setup_lambda_repository:

Setup Lambda repository - Detailed manual steps
-----------------------------------------------

1. Navigate to the AWS console, and select ``CloudShell`` at the bottom left of the console. Open CloudShell in the region you want to deploy to. You may run the following one-line command, or continue to the next step to run the individual commands.

.. code-block:: shell

$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) && export REPO_NAME="cypienta-vrl-lambda" && docker pull public.ecr.aws/p2d2x2s3/cypienta/vrl-lambda:v0.1 && aws ecr create-repository --repository-name ${REPO_NAME} && export ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com" && aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ECR_URI} && docker tag public.ecr.aws/p2d2x2s3/cypienta/vrl-lambda:v0.1 ${ECR_URI}/${REPO_NAME}:v0.1 && docker push ${ECR_URI}/${REPO_NAME}:v0.1 && echo ${ECR_URI}/${REPO_NAME}:v0.1

2. Store the AWS Account ID and the ECR repository name as environment variables in CloudShell.

.. code-block:: shell

# Save AWS Account ID as environment variable
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

# Replace value with ECR repository name you want to give
@@ -39,9 +62,13 @@ Setup Lambda repository

.. code-block:: shell

# Create ECR URI for ECR repository
$ export ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
# Login to the ECR repository
$ aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ECR_URI}
# Tag pulled image to push to ECR repository
$ docker tag public.ecr.aws/p2d2x2s3/cypienta/vrl-lambda:v0.1 ${ECR_URI}/${REPO_NAME}:v0.1
# Push the image to ECR repository
$ docker push ${ECR_URI}/${REPO_NAME}:v0.1

7. Copy the ECR Image URI and make a note of it to use in the CloudFormation template.
@@ -50,21 +77,28 @@ Setup Lambda repository

$ echo ${ECR_URI}/${REPO_NAME}:v0.1

8. Once you have made a note of the ECR image URI for the VRL lambda, move to the section :ref:`deploy_cloud_formation`.

.. _deploy_cloud_formation:

Deploy resources using the Cloud Formation template
---------------------------------------------------

1. Clone the Github repo
1. On your local machine, download the `template file <https://github.com/cypienta/AWS/blob/862fe7a6a28a3be7c8f3367d142d5464a2f52037/template.yaml>`__ from GitHub, or use the following command to download the ``template.yaml`` file.

.. code-block:: shell

$ git clone -b v0.7 https://github.com/cypienta/Lambda.git
$ wget https://github.com/cypienta/AWS/raw/v0.7/template.yaml

.. note::
This command will clone the repository and checkout the branch ``v0.7``
Run this command on your local machine; it downloads the ``template.yaml`` file.

2. Navigate to the AWS console, and search for ``CloudFormation``.

.. note::
The UI component deployed from this template is supported only in the following AWS Regions. Make sure that you create the stack in a supported region.
Supported AWS regions: eu-north-1, ap-south-1, eu-west-3, us-east-2, eu-west-1, eu-central-1, sa-east-1, ap-east-1, us-east-1, ap-northeast-2, eu-west-2, ap-northeast-1, us-west-2, us-west-1, ap-southeast-1, ap-southeast-2, ca-central-1

3. Click on ``Stacks`` in the left-hand side panel, and click on the ``Create stack`` dropdown. Select ``With new resources (standard)`` to start creating a stack.

.. image:: resources/create_stack_start.png
@@ -98,23 +132,23 @@ Deploy resources using the Cloud Formation template
ATTACK Technique detector. Use the version 0.4 Product ARN for the region in which the CloudFormation stack is created.

**ClusterModelARN:** The ARN of the subscribed model package for
Temporal Clustering. Use version 0.6 Product ARN for the region in which CloudFormation stack is created.
Temporal Clustering. Use the version 0.7.1 Product ARN for the region in which the CloudFormation stack is created.

**FlowModelARN:** The ARN of the subscribed model package for MITRE
flow detector. Use version 0.6 Product ARN for the region in which CloudFormation stack is created.
flow detector. Use the version 0.7 Product ARN for the region in which the CloudFormation stack is created.

**SuperuserEmail:** The email for the admin user of the UI

**SuperuserUsername:** The username of the admin user of the UI

**SuperuserPassword:** The password of the admin user of the UI

**VRLLambdaImage:** The container image of the VRL Lambda that was pushed to ECR private repository in :ref:`setup_lambda_repository_single_command`

**WebContainerImage:** The container image of the subscribed marketplace UI product with tag ``market*``. The ``Web container image`` noted in the section :doc:`subscribe`.

**NginxContainerImage:** The container image of the subscribed marketplace UI product with tag ``nginx-market*``. The ``Nginx container image`` noted in the section :doc:`subscribe`.

**VRLLambdaImage:** The container image of the VRL Lambda that was pushed to ECR private repository in :ref:`setup_lambda_repository`

The constraints for choosing the ``Cpu`` and ``Memory`` for the cluster can be found `here <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html#cfn-ecs-taskdefinition-cpu>`__

The recommended value for the **ChunkSize** parameter is below ``100000``.
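
If you prefer the AWS CLI over the console wizard, the sketch below shows one way the same stack could be created from a terminal. This is a minimal, hedged sketch, not the documented procedure: the stack name ``cypienta-stack`` and every ``<placeholder>`` value are assumptions, and the template may expect additional parameters beyond those listed above.

.. code-block:: shell

   # Hedged sketch: create the stack from the AWS CLI instead of the console.
   # The stack name and all <placeholder> values are assumptions; substitute
   # the ARNs, image URIs, and credentials you noted earlier. The capabilities
   # flags may be required because the template creates IAM resources.
   $ aws cloudformation create-stack \
       --stack-name cypienta-stack \
       --template-body file://template.yaml \
       --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
       --timeout-in-minutes 15 \
       --parameters \
           ParameterKey=TechniqueModelARN,ParameterValue=<technique-model-arn> \
           ParameterKey=ClusterModelARN,ParameterValue=<cluster-model-arn> \
           ParameterKey=FlowModelARN,ParameterValue=<flow-model-arn> \
           ParameterKey=VRLLambdaImage,ParameterValue=<ecr-image-uri> \
           ParameterKey=WebContainerImage,ParameterValue=<web-container-image> \
           ParameterKey=NginxContainerImage,ParameterValue=<nginx-container-image> \
           ParameterKey=SuperuserEmail,ParameterValue=<admin-email> \
           ParameterKey=SuperuserUsername,ParameterValue=<admin-username> \
           ParameterKey=SuperuserPassword,ParameterValue=<admin-password> \
           ParameterKey=ChunkSize,ParameterValue=50000
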
@@ -128,7 +162,11 @@ Deploy resources using the Cloud Formation template
failure options``, select ``Roll back all stack resources`` for
``Behaviour on provisioning failure``. Select ``Delete all newly
created resources`` for ``Delete newly created resources during a
rollback``. And then click on ``Next``.
rollback``. Expand ``Stack creation options - optional`` and, under ``Timeout``, enter ``15`` to set a maximum timeout of 15 minutes for the stack. Then click on ``Next``.

.. image:: resources/stack_timeout.png
:alt: stack timeout
:align: center

8. Now in the ``Review and create`` page, you can review your parameters.
At the bottom of the page, select all checkboxes for ``I
@@ -138,10 +176,31 @@ Deploy resources using the Cloud Formation template
9. You can monitor the events of the cloud stack by clicking on the
recently created cloud stack and going to the ``Events`` tab.

.. note::
**Resource Creation Time:** The cloud stack will take approximately 10 minutes to complete the creation of all the resources.

10. Once the cloud stack has completed successfully, you can start using
the products.
the products. Click on the ``Outputs`` tab for the recently created cloud
stack and note down the load balancer URL for the UI under ``LoadBalancerDNSName``.
Click on the link to open the UI.

.. image:: resources/lb_url.png
:alt: lb url
:align: center
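
The same URL can be read from the CLI. The sketch below is a convenience under stated assumptions, not part of the documented flow; ``<your-stack-name>`` is whatever name you gave the stack when creating it.

.. code-block:: shell

   # Hedged sketch: read the LoadBalancerDNSName output without the console.
   # <your-stack-name> is the name you gave the CloudFormation stack.
   $ aws cloudformation describe-stacks \
       --stack-name <your-stack-name> \
       --query "Stacks[0].Outputs[?OutputKey=='LoadBalancerDNSName'].OutputValue" \
       --output text
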

Now all your resources are ready to be used.

You may now go to the step :doc:`end_to_end_test` to start testing
your application.

Handling Multiple Inputs
-------------------------

The pipeline processes files in the input folder sequentially, in the order of upload.
Only one file is processed at a time. Once a file has finished processing, the
pipeline automatically starts on the next file in the queue.

.. note::
**Small input files:** For best performance, avoid uploading many small files,
because each SageMaker job incurs a startup-time overhead. Instead, aggregate
small inputs into larger input files.

**Handling large input files:** Currently the pipeline can handle up to 100,000 events in a single input file. Be mindful of the size of the input file you use.
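
As a rough illustration of the aggregation advice above, the sketch below merges several small JSON files into one larger file before uploading. It assumes the events are plain JSON arrays and that the bucket uses an ``input/`` prefix; both are assumptions, so check your actual input format and bucket layout first.

.. code-block:: shell

   # Hedged sketch: merge small JSON-array files into one larger input file
   # (the array format is an assumption -- adjust to your actual schema),
   # then upload it. The bucket name and "input/" prefix are placeholders.
   $ jq -s 'add' part1.json part2.json part3.json > combined.json
   $ aws s3 cp combined.json s3://<your-bucket-name>/input/combined.json
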
7 changes: 7 additions & 0 deletions docs/source/getting_started/prerequisites.rst
@@ -13,7 +13,14 @@ Make sure that you have the required permissions for resources for the IAM user
- ECR
- IAM
- CloudFormation
- Step Functions

To confirm that you have the required permissions for the resources necessary to run the
pipeline, you can run the following script. To run the script, the IAM user must have the ``iam:SimulatePrincipalPolicy`` permission.

.. code-block:: console

$ wget -O- https://raw.githubusercontent.com/cypienta/AWS/v0.7/check_permissions.py | python

Quotas
------
10 changes: 5 additions & 5 deletions docs/source/getting_started/subscription.rst
@@ -42,7 +42,7 @@ Temporal Clustering
:alt: Subscribe to technique detector
:align: center

3. Click on ``Continue to configuration``. In the section ``Select your launch method``, select ``AWS CloudFormation``. Select the ``Software Version`` as ``0.6`` from the drop down. Select the ``Region`` in which you would want to deploy Cypienta products. Copy and make note of the ``Product Arn``.
3. Click on ``Continue to configuration``. In the section ``Select your launch method``, select ``AWS CloudFormation``. Select the ``Software Version`` as ``0.7.1`` from the drop down. Select the ``Region`` in which you would want to deploy Cypienta products. Copy and make note of the ``Product Arn``.

.. image:: resources/model_arn_cluster.png
:alt: Subscribe to flow detector
@@ -66,7 +66,7 @@ MITRE ATTACK Flow Detector
:alt: Subscribe to technique detector
:align: center

3. Click on ``Continue to configuration``. In the section ``Select your launch method``, select ``AWS CloudFormation``. Select the ``Software Version`` as ``0.6`` from the drop down. Select the ``Region`` in which you would want to deploy Cypienta products. Copy and make note of the ``Product Arn``.
3. Click on ``Continue to configuration``. In the section ``Select your launch method``, select ``AWS CloudFormation``. Select the ``Software Version`` as ``0.7`` from the drop down. Select the ``Region`` in which you would want to deploy Cypienta products. Copy and make note of the ``Product Arn``.

.. image:: resources/model_arn_flow.png
:alt: Subscribe to technique detector
@@ -90,7 +90,7 @@ Cypienta User Interface (UI)
:alt: confirm subscribe
:align: center

4. Select the ``Fulfillment option`` as ``ECS``. Select the ``Software version`` as ``v0.1.2``. Then click on ``Continue to Launch``
4. Select the ``Fulfillment option`` as ``ECS``. Select the ``Software version`` as ``v0.2.2``. Then click on ``Continue to Launch``

.. image:: resources/to_launch.png
:alt: to launch
@@ -111,12 +111,12 @@ Cypienta User Interface (UI)
--username AWS \
--password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com

CONTAINER_IMAGES="709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:nginx-marketv0.0.3,709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:marketv0.1.2"
CONTAINER_IMAGES="709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:nginx-marketv0.0.3,709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:marketv0.2.2"

for i in $(echo $CONTAINER_IMAGES | sed "s/,/ /g"); do docker pull $i; done

Here the two images are:

- **Web container image:** 709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:marketv0.1.2
- **Web container image:** 709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:marketv0.2.2

- **Nginx container image:** 709825985650.dkr.ecr.us-east-1.amazonaws.com/cypienta/cytech:nginx-marketv0.0.3
91 changes: 91 additions & 0 deletions docs/source/getting_started/troubleshoot.rst
@@ -22,3 +22,94 @@ AWS console for the region you are using. Search for ``Service Quotas``

3. Select the required instance type from the list and click on ``Request
increase at account level``.

How to delete stack
-------------------

1. Navigate to the AWS console and search for ``CloudWatch``. Make sure you are in the same region in which you created the CloudFormation stack.

2. In the left-hand side panel, under ``Logs``, click on ``Log groups``. Select the check boxes for the log groups that were created by the CloudFormation stack, click on the ``Actions`` dropdown, click on ``Delete log group(s)``, and then click on the ``Delete`` button.

3. Next, search for ``S3`` in the AWS console search bar.

4. Select the bucket that was created from the CloudFormation stack and click on ``Empty``. Type in ``permanently delete`` in the confirmation box and click on ``Empty``.

5. Now search for ``CloudFormation`` in the AWS console search bar.

6. Open the stack that you want to delete and click on ``Delete``. Wait for the entire stack to be deleted before you move on to creating a new stack.

.. note::
If the stack fails to delete, click ``Retry delete``.

To speed up the stack deletion, follow the optional steps below (a rough CLI equivalent is sketched after the list):

1. Navigate to the AWS console, search for ``ECS``, and select ``Elastic Container Service``.

2. Click on the ECS cluster deployed from the stack. Select all the services from the ``Services`` tab and click on ``Delete service``. Check the box for ``Force delete``, type ``delete`` in the confirmation box, and then click on ``Delete``.

3. Navigate to the AWS console and search for ``EC2``.

4. Manually terminate the running EC2 instances named ``* - <ECS-cluster-name>``. Select all the pertinent instances, click on the ``Instance state`` dropdown, and click on ``Terminate instance``.
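
For reference, a rough CLI equivalent of the console cleanup above is sketched below. All names are placeholders, and the log group names vary per deployment; confirm the actual resources created by your stack before running these destructive commands.

.. code-block:: shell

   # Hedged sketch of the same cleanup from the CLI; every name below is a
   # placeholder -- confirm the real log group, bucket, and stack names first.
   # Delete a log group created by the stack (repeat for each one)
   $ aws logs delete-log-group --log-group-name <log-group-name>
   # Empty the S3 bucket created by the stack
   $ aws s3 rm s3://<your-bucket-name> --recursive
   # Delete the stack and wait for the deletion to finish
   $ aws cloudformation delete-stack --stack-name <your-stack-name>
   $ aws cloudformation wait stack-delete-complete --stack-name <your-stack-name>
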

Common Mistakes
----------------

Some of the common errors that can result in the failure of the CloudFormation stack:

- Duplicate S3 bucket name
- Incorrect ARN for Models/UI
- Incorrect Image for VRL Lambda

Duplicate S3 bucket name
~~~~~~~~~~~~~~~~~~~~~~~~

In the case of a duplicate S3 bucket name, delete the failed CloudFormation stack,
then choose a new globally unique S3 bucket name and recreate the stack.

Incorrect ARN for Models/UI
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the case of an incorrect ARN for the models/UI, delete the failed CloudFormation stack,
then confirm the ARNs for all models and UI components as described in :doc:`subscription`, and recreate the stack.

Incorrect Image for VRL Lambda
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the case of an incorrect image for VRL lambda, delete the failed CloudFormation stack,
then ensure that you have the correct ECR Image URI and version number, and recreate the stack.
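
One way to rule this mistake out before creating the stack is to check that the image really exists in your private ECR repository. The sketch below assumes the repository name and tag used earlier in this guide (``cypienta-vrl-lambda`` and ``v0.1``); adjust them if you chose different values.

.. code-block:: shell

   # Hedged sketch: confirm the VRL lambda image exists in your private ECR
   # repository. The repository name and tag assume the values used earlier
   # in this guide; adjust if yours differ.
   $ aws ecr describe-images \
       --repository-name cypienta-vrl-lambda \
       --image-ids imageTag=v0.1
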

Common errors
-------------

CapacityError: Unable to provision requested ML compute capacity. Please retry using a different ML instance type.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the SageMaker batch transform job fails for ``transform-job-cluster-*`` with the error
``CapacityError: Unable to provision requested ML compute capacity. Please retry using a different ML instance type.``
the batch transform job can be retriggered manually. Follow the steps below to retrigger:

1. Open the lambda function ``create_cluster``.

2. Click on the ``Configuration`` tab, then click on ``Environment variables``.
Click on the ``Edit`` button, then click on ``Add environment variable``. In the ``Key`` text field enter ``batch_transform_job_suffix``, and in the ``Value`` text field enter any unique value of at most 3 characters, for example ``1``. Then click on the ``Save`` button.

3. Open the S3 bucket created by the CloudFormation stack. Navigate to ``scratch/output/classification/<unique_id>/``.

4. Select ``input.json``, click on ``Actions``, and click on ``Copy``. On the Copy page, click on ``Browse S3``, click on ``Choose destination``, and then click on ``Copy``.

5. This will trigger a new batch transform job.
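
Steps 3 and 4 above can also be approximated from the CLI. The sketch below rests on assumptions: it presumes the bucket's event notification also fires on copy events (as the console copy in step 4 suggests), and it replaces the object's metadata because S3 only allows copying an object onto itself when the metadata changes. The same approach applies to the flow job described next, using ``scratch/output/cluster/<unique_id>/input_flow.json`` instead.

.. code-block:: shell

   # Hedged sketch of steps 3-4: re-copy input.json in place to emit a new
   # S3 object-created event. <your-bucket-name> and <unique_id> are
   # placeholders; --metadata-directive REPLACE is required when the source
   # and destination are the same object.
   $ aws s3 cp \
       s3://<your-bucket-name>/scratch/output/classification/<unique_id>/input.json \
       s3://<your-bucket-name>/scratch/output/classification/<unique_id>/input.json \
       --metadata retrigger=1 --metadata-directive REPLACE
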

If the SageMaker batch transform job fails for ``transform-job-flow-*`` with the error
``CapacityError: Unable to provision requested ML compute capacity. Please retry using a different ML instance type.``
the batch transform job can be retriggered manually. Follow the steps below to retrigger:

1. Open the lambda function ``create_flow``.

2. Click on the ``Configuration`` tab, then click on ``Environment variables``.
Click on the ``Edit`` button, then click on ``Add environment variable``. In the ``Key`` text field enter ``batch_transform_job_suffix``, and in the ``Value`` text field enter any unique value of at most 3 characters, for example ``1``. Then click on the ``Save`` button.

3. Open the S3 bucket created by the CloudFormation stack. Navigate to ``scratch/output/cluster/<unique_id>/``.

4. Select ``input_flow.json``, click on ``Actions``, and click on ``Copy``. On the Copy page, click on ``Browse S3``, click on ``Choose destination``, and then click on ``Copy``.

5. This will trigger a new batch transform job.
17 changes: 14 additions & 3 deletions docs/source/index.rst
@@ -22,20 +22,31 @@ In this documentation, you will find detailed instructions for:

.. toctree::
:maxdepth: 1
:caption: Ex Integrations (SIEM, XDR, SOAR)
:caption: Cypienta UI

deploy_ui/start_using

.. toctree::
:maxdepth: 1
:caption: Splunk Integration

splunk/splunk
splunk/vrl
splunk/output

.. toctree::
:maxdepth: 1
:caption: Elastic Integration

elastic/elastic
elastic/vrl
elastic/output

.. toctree::
:maxdepth: 1
:caption: Cypienta UI
:caption: JIRA Integration

deploy_ui/start_using
jira/jira

.. toctree::
:maxdepth: 1