
Conversation

@ycombinator (Contributor) commented Dec 11, 2025

What does this PR do?

This PR identifies Service Runtime components by their input type alone; the output ID is no longer used.

Why is it important?

Service Runtime components are intended to be kept running (via a service) for as long as possible. We should only install/start or uninstall/stop them if they are being explicitly added or removed, respectively, from the component model. If only their configuration is being updated, we should keep the component running.

If a component's ID changes between the last and current component models, Elastic Agent will ask the component's service to uninstall and then reinstall itself. Prior to this PR, a service component's ID was determined by its input type and output ID. Therefore, if a service component's output were changed, the service would be uninstalled and then reinstalled. This is undesirable behavior, as services should be kept running as long as possible.

With the changes in this PR, we no longer consider the output ID when generating service components' IDs. If a service component's output is changed, its ID remains the same between the last and current component models. Elastic Agent does not uninstall and reinstall the component's service but simply passes the configuration change to it (which it was doing prior to this PR anyway).
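
To make this concrete, here is a minimal sketch of the ID change, assuming an ID-building helper along these lines; the names and types are illustrative, not the actual elastic-agent implementation:

```go
package component

import "fmt"

// unit is a stand-in for the data the component model derives IDs from;
// the real elastic-agent types differ.
type unit struct {
	inputType string // e.g. "endpoint"
	outputID  string // e.g. "default"
}

// Before this PR: the output ID was part of a service component's ID
// (e.g. "endpoint-default"), so changing the output produced a new ID and
// forced the service to be uninstalled and reinstalled.
func oldServiceComponentID(u unit) string {
	return fmt.Sprintf("%s-%s", u.inputType, u.outputID)
}

// After this PR: the input type alone identifies the component
// (e.g. "endpoint"), so an output change leaves the ID, and therefore the
// running service, untouched.
func newServiceComponentID(u unit) string {
	return u.inputType
}
```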

Checklist

  • I have read and understood the pull request guidelines of this project.
  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in ./changelog/fragments using the changelog tool
  • I have added an integration test or an E2E test

Disruptive User Impact

None.

How to test this PR locally

Policy reassignment does not uninstall/reinstall Endpoint

  1. Using the Fleet UI, create three Agent policies:
    • default: containing only the system integration.
    • tp-es: containing the Elastic Defend integration, with tamper protection enabled, and using the Elasticsearch output.
    • tp-ls: containing the Elastic Defend integration, with tamper protection enabled, and using the Logstash output. Note that you will need to create the Logstash output in Fleet > Settings.
  2. Enroll an Elastic Agent in the tp-es policy and verify the agent is healthy and shipping data.
  3. Assign the Agent to the tp-ls policy.
  4. Check the Agent logs and make sure the Endpoint component is not being uninstalled and reinstalled. Concretely, check that there is no log entry for `uninstall endpoint service` (see the sketch after these steps).
  5. Check the Endpoint logs (located under /opt/Elastic/Endpoint/state/log/ on Linux) and make sure that Endpoint has connected to Logstash (or has attempted to and failed if there is no actual Logstash endpoint listening).
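
A minimal sketch of how these log checks could be scripted; the log path and the exact message strings are assumptions taken from the steps in this section, not verified elastic-agent output:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// The log path is an assumption; point it at the actual Agent log file.
	data, err := os.ReadFile("/opt/Elastic/Agent/elastic-agent.log")
	if err != nil {
		panic(err)
	}
	log := string(data)

	// Scenario 1, step 4: this message must NOT appear.
	fmt.Println("uninstall seen:", strings.Contains(log, "uninstall endpoint service"))

	// Scenario 2 below, step 2: these messages must appear in this order.
	stop := strings.Index(log, "stopping endpoint service runtime")
	uninstall := strings.Index(log, "uninstall endpoint service")
	stopped := strings.Index(log, "Stopped: endpoint service runtime")
	fmt.Println("ordered:", stop >= 0 && uninstall > stop && stopped > uninstall)
}
```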

Removing Endpoint from policy uninstalls Endpoint

  1. Assign the Agent to the default policy.
  2. Check the Agent logs and make sure the Endpoint component is stopped and uninstalled. Concretely, check that there is a log entry for `stopping endpoint service runtime`, followed by `uninstall endpoint service`, followed by `Stopped: endpoint service runtime` (the helper sketched above can verify this ordering).

Related issues

Questions to ask yourself

  • How are we going to support this in production?
  • How are we going to measure its adoption?
  • How are we going to debug this?
  • What are the metrics I should take care of?
  • ...

mergify bot (Contributor) commented Dec 11, 2025

This pull request does not have a backport label. Could you fix it @ycombinator? 🙏
To fixup this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-\d.\d is the label that automatically backports to the 8.\d branch, where \d is a digit.
  • backport-active-all is the label that automatically backports to all active branches.
  • backport-active-8 is the label that automatically backports to all active minor branches for the 8 major.
  • backport-active-9 is the label that automatically backports to all active minor branches for the 9 major.

@ycombinator changed the title from "Avoid stopping and stopping service components on policy change" to "Avoid starting and stopping service components on policy change" on Dec 12, 2025
@ycombinator force-pushed the service-component-avoid-stop-start branch from 84a4523 to 1951fec on December 12, 2025 at 14:53
@ycombinator marked this pull request as ready for review on December 12, 2025 at 14:54
@ycombinator requested a review from a team as a code owner on December 12, 2025 at 14:54
@ycombinator added the Team:Elastic-Agent-Control-Plane label (Label for the Agent Control Plane team) on Dec 12, 2025
@elasticmachine (Contributor) commented:

Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)

@cmacknz (Member) commented Dec 15, 2025

Dropping the output ID makes sense to me, but I think if you do a policy assignment the input ID also changes, which would cause an uninstall and reinstall.

Does the policy reassignment case work properly for tamper protected agents? Or is this only handling the case where the output is changed within the same policy?

@ycombinator (Contributor, Author) commented:

> Dropping the output ID makes sense to me, but I think if you do a policy assignment the input ID also changes, which would cause an uninstall and reinstall.

As discussed in today's meeting, the component ID uses the input type (not input ID) so, with the change in this PR, we should still prevent an uninstall and reinstall.
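
For illustration, a minimal sketch of the distinction, with values modeled on the status output further down this thread; the IDs are examples, not authoritative:

```go
package main

import "fmt"

func main() {
	// Illustrative values only, modeled on the status output shown below.
	inputID := "endpoint-a92a4f1c-8773-43ce-8e97-062fc2cebb1f" // can change on policy reassignment
	inputType := "endpoint"                                    // stable across policies

	// The service component's ID is derived from the input type, not the
	// input ID, so a reassignment does not produce a new component ID.
	fmt.Println("input ID:     ", inputID)
	fmt.Println("component ID: ", inputType)
}
```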

@ycombinator changed the title from "Avoid starting and stopping service components on policy change" to "Avoid uninstalling and re-installing service components on policy change" on Dec 16, 2025
@ycombinator (Contributor, Author) commented Dec 16, 2025

> Does the policy reassignment case work properly for tamper protected agents? Or is this only handling the case where the output is changed within the same policy?

> I will test this along with the upgrade scenario.

Yes, it does. I forgot that that's how I'd tested this PR to begin with. I even added manual testing steps to the PR's description. 🤦

I will test the upgrade scenario next.

@ycombinator (Contributor, Author) commented:

> I will test the upgrade scenario next.

Okay, so upgrading works in that nothing is broken after upgrade. Here's the output of elastic-agent status before, during, and after the upgrade:

Before

```
┌─ fleet
│  └─ status: (HEALTHY) Connected
└─ elastic-agent
   ├─ status: (HEALTHY) Running
   ├─ info
   │  ├─ id: a4d2bed3-5c18-42d0-bc55-cc838f2a43a6
   │  ├─ version: 9.2.0-SNAPSHOT
   │  └─ commit: 07d5c3a34e486f81130d7177678c547944dbb84c
   └─ endpoint-default
      ├─ status: (HEALTHY) Healthy: communicating with endpoint service
      ├─ endpoint-default
      │  ├─ status: (HEALTHY) Applied policy 'defend' (Defend policy rev. 1. Agent policy rev. 21.)
      │  └─ type: OUTPUT
      └─ endpoint-default-a92a4f1c-8773-43ce-8e97-062fc2cebb1f
         ├─ status: (HEALTHY) Applied policy 'defend' (Defend policy rev. 1. Agent policy rev. 21.)
         └─ type: INPUT
```

During

```
┌─ fleet
│  └─ status: (STARTING)
├─ elastic-agent
│  ├─ status: (HEALTHY) Running
│  ├─ info
│  │  ├─ id: 6c6af8b5-7e88-42d0-8d71-70afbcfa39f1
│  │  ├─ version: 9.3.0
│  │  └─ commit: 7026d24f297cdcd6826da98ec3c39ea7f0c59c19
│  └─ endpoint
│     ├─ status: (HEALTHY) Healthy: communicating with endpoint service
│     ├─ endpoint
│     │  ├─ status: (HEALTHY) Applied policy 'defend' (Defend policy rev. 1. Agent policy rev. 21.)
│     │  └─ type: OUTPUT
│     └─ endpoint-a92a4f1c-8773-43ce-8e97-062fc2cebb1f
│        ├─ status: (HEALTHY) Applied policy 'defend' (Defend policy rev. 1. Agent policy rev. 21.)
│        └─ type: INPUT
└─ upgrade_details
   ├─ target_version: 9.3.0
   ├─ state: UPG_WATCHING
   ├─ action_id: 349b43f0-973b-4b40-ab21-ee082debcfaf
   └─ metadata
```

After

```
┌─ fleet
│  └─ status: (HEALTHY) Connected
└─ elastic-agent
   ├─ status: (HEALTHY) Running
   ├─ info
   │  ├─ id: 6c6af8b5-7e88-42d0-8d71-70afbcfa39f1
   │  ├─ version: 9.3.0
   │  └─ commit: 7026d24f297cdcd6826da98ec3c39ea7f0c59c19
   └─ endpoint
      ├─ status: (HEALTHY) Healthy: communicating with endpoint service
      ├─ endpoint
      │  ├─ status: (HEALTHY) Applied policy 'defend' (Defend policy rev. 1. Agent policy rev. 21.)
      │  └─ type: OUTPUT
      └─ endpoint-a92a4f1c-8773-43ce-8e97-062fc2cebb1f
         ├─ status: (HEALTHY) Applied policy 'defend' (Defend policy rev. 1. Agent policy rev. 21.)
         └─ type: INPUT
```

Notice that the ID of the Endpoint component after the upgrade changes to `endpoint`.  Specifically, the ID no longer has the output ID, `default`, as the suffix.

@cmacknz added the backport-9.3 label (Automated backport to the 9.3 branch) on Dec 17, 2025
@cmacknz (Member) commented Dec 17, 2025

What's the right additional testing to add for this? It looks correct by inspection, but I think we should add something proving it does what we think.

If we could get a simulated policy reassignment (a change to the input `id:` key) and an output change as a test of just the component model, that's probably the lightest thing we could do.

If policy reassignment always fails without this change, that's a test case that is missing from the endpoint integration tests, so we could add that. I also don't think we have a test that does an upgrade with defend installed, but that is even heavier; we have had lots of problems around this discovered in the field, though, so it may be worth it.

@ycombinator force-pushed the service-component-avoid-stop-start branch from d7bf91b to e892133 on December 22, 2025 at 04:48
@ycombinator (Contributor, Author) commented Dec 22, 2025

> If we could get a simulated policy reassignment (a change to the input `id:` key) and an output change as a test of just the component model, that's probably the lightest thing we could do.

Added a unit test case for input ID change to `TestComponentUpdateDiff` in cf89151.

A unit test that tests just changing the output already exists in `TestComponentUpdateDiff`.

But I added additional assertions for that test case to make sure no components were added, removed, or updated in the component model as a result of such a change: 4e352a7#diff-21de8fb82a5ebccaa0ca3afb0cb8bfbe10d5b8f920021785c2d485ee35a3556cR225-R227

```go
require.NotZero(t, secondPID)

// Assert that endpoint process IDs haven't changed across policy reassignment
require.Equal(t, firstPID, secondPID)
```
@ycombinator (Contributor, Author) commented on this diff:

Is checking that Endpoint hasn't restarted good enough for this test? Or should we try to check that Endpoint has not been uninstalled/reinstalled? That might be more involved and require looking at the Endpoint service's logs in the service manager.

@ycombinator (Contributor, Author) commented Dec 22, 2025

> If policy reassignment always fails without this change, that's a test case that is missing from the endpoint integration tests, so we could add that.

Added in 8a1a8c3

@ycombinator force-pushed the service-component-avoid-stop-start branch from 13f5ff1 to 8a1a8c3 on December 22, 2025 at 07:13
@ycombinator (Contributor, Author) commented:

> I also don't think we have a test that does an upgrade with defend installed...
Looks like we do have some integration tests around upgrading Endpoint:

```go
func TestUpgradeAgentWithTamperProtectedEndpoint_DEB(t *testing.T) {
func TestUpgradeAgentWithTamperProtectedEndpoint_RPM(t *testing.T) {
```

@ycombinator requested a review from cmacknz on December 22, 2025 at 07:15
@elasticmachine (Contributor) commented Dec 23, 2025

💔 Build Failed

Failed CI Steps

History

cc @ycombinator


Labels

backport-9.3: Automated backport to the 9.3 branch
Team:Elastic-Agent-Control-Plane: Label for the Agent Control Plane team


Development

Successfully merging this pull request may close these issues.

Cannot successfully change output type or name in tamper protected agent policies that contain Elastic Defend
