
Conversation

@d-klotz
Contributor

@d-klotz d-klotz commented Jun 4, 2025

Version

Published prerelease version: v2.0.0-next.67

Changelog

🚀 Enhancement

  • @friggframework/devtools
    • feat(devtools): add Frigg Authenticator CLI tool #523 (@d-klotz)

🐛 Bug Fix

  • @friggframework/devtools
    • fix(devtools): pass refresh_token to refreshAccessToken in auth-tester #526 (@d-klotz)
  • @friggframework/core, @friggframework/devtools, @friggframework/eslint-config, @friggframework/prettier-config, @friggframework/schemas, @friggframework/serverless-plugin, @friggframework/test, @friggframework/ui
    • fix(requester): improve auth refresh handling and tests #525 (@d-klotz)
    • fix(devtools): include test/ in npm package files #520 (@d-klotz)

Authors: 1

// Removes "Bearer " and trims
const token = authorizationHeader.split(' ')[1].trim();
// Load user for later middleware/routes to use
req.user = await User.newUser({ token });
Contributor Author

User.newUser is misleading: it does not create a new user; instead it retrieves a user based on the JWT token.

Contributor Author

User.fromToken({ token }) or User.getByToken({ token }) would be a better fit.

Contributor

Yup agreed. I was using existing methods though.
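A minimal sketch of what the proposed rename could look like, assuming the lookup behavior stays the same as today's User.newUser({ token }); the jwt decode and UserModel call are illustrative stand-ins, not the framework's actual implementation.

const jwt = require('jsonwebtoken');

class User {
    // Hypothetical rename of User.newUser: resolves an existing user from a JWT
    // rather than creating one.
    static async fromToken({ token }) {
        const { sub: userId } = jwt.decode(token); // assumes the user id lives in `sub`
        return UserModel.findById(userId); // UserModel stands in for the real lookup
    }
}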

const appDefinition = backendJsFile.Definition;

const backend = createFriggBackend(appDefinition);
const loadRouterFromObject = (IntegrationClass, routerObject) => {
Contributor Author

"Object" is too vague, I have no clue what can be in there or what that means

Contributor Author

I think we could call it loadIntegrationRoutes

Contributor Author

Also, the routerObject is already inside of IntegrationClass, isn't it?

Contributor

The overall method name of loadIntegrationRoutes makes sense, or integration-defined routes.

The load-from-object method was intended to load from a... route object? Dunno what to call it. The "method", "path", "event" object. I'm not even sure where I got that concept from, except that it's a common way to represent an HTTP endpoint. Minus the event thing; that I want to be an easy way for a dev to reference when creating event handlers.
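For concreteness, a route object in that "method", "path", "event" shape might look like the following; the path and event name are illustrative only.

// A single route object: one HTTP endpoint mapped to a named integration event.
const routeObject = { method: 'GET', path: '/contacts', event: 'LIST_CONTACTS' };

// Per the loop over IntegrationClass.Definition.routes shown below, several of
// these could live alongside any function-style route definitions.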


for (const routeDef of IntegrationClass.Definition.routes) {
if (typeof routeDef === 'function') {
router.use(basePath, routeDef(IntegrationClass));
Contributor Author

I don't understand this.

You're looping through IntegrationClass.Definition.routes, and for every routeDef you're passing the same integration class as a parameter?

Contributor Author

Do you have an example of a place where we declare routes as a function and need its own integration class as a parameter?

Contributor Author

Why not stick to one type of route definition? Either function or object

Contributor

I think the rationale was that in some cases, you need to have some logic that runs outside of Express route generation. https://github.com/lefthookhq/frigg-2.0-prototyping/blob/460ad99b85b5a53bac5a859de8b8d3780b85937d/backend/src/testRouter.js#L8

In other cases, relying on the normal instantiation is fine and for those you can reach for either the straight express route or the static object.

I wanted to provide a "quick and easy", "moderate complexity", "high complexity" set of options.

That said, I'm not sure what I did was the right way to do it.
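A rough sketch of the two styles being weighed here, assuming the loader accepts either; the function form mirrors the linked testRouter.js idea of running logic outside the generated route handling. All names below are illustrative.

const express = require('express');

// "Quick and easy": a static route object handled by the object loader.
const objectRoute = { method: 'GET', path: '/status', event: 'GET_STATUS' };

// "High complexity": a function that receives the integration class and returns
// its own Express router, so arbitrary logic can run outside route generation.
const functionRoute = (IntegrationClass) => {
    const router = express.Router();
    router.get('/custom', async (req, res, next) => {
        try {
            // Anything custom can happen here before responding.
            res.json({ ok: true, integration: IntegrationClass.name });
        } catch (err) {
            next(err);
        }
    });
    return router;
};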

router[method.toLowerCase()](path, async (req, res, next) => {
try {
const integration = new IntegrationClass({});
await integration.loadModules();
Contributor Author

We already loadModules in the IntegrationBase constructor; we could registerEventHandlers in the constructor as well.

Contributor

I think I had an issue of not wanting to load event handlers yet because of some dynamic issues? Or something else. But, happy to not duplicate invoking!

await integration.loadModules();
await integration.registerEventHandlers();
const result = await integration.send(event, {req, res, next});
res.json(result);
Contributor Author

Do we really have to return the result here?
Developers will by instinct call the HTTP response when it's available to them. I was not aware that this existed until now, and other developers probably won't be either.

Contributor

Well, the intent was to remove the need to think about Express inside the handler function. Just do what you need; then, if you need to return something, return it.

Contributor Author

But if someone sees a "res" object, they would not simply return a value and ignore "res".
BTW, one of the reasons I don't like Ruby on Rails is that it does too much magic and I don't know why or how 🫠
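To make the intended ergonomics concrete, a handler under this design would just return data and let the generated route call res.json(result); the event name and registry shape below are illustrative assumptions.

const eventHandlers = {
    // No res/next in sight: whatever is returned becomes the JSON response body
    // via the res.json(result) call in the generated route shown above.
    LIST_CONTACTS: async ({ req }) => {
        return { contacts: [], requestedBy: req.user ? req.user.id : null };
    },
};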

for (const [entityId, key] of Object.entries(
integrationRecord.entityReference
)) {
const moduleInstance =
Contributor Author

Are the modules not already instantiated in IntegrationBase -> loadModules() when we do:

const instance = new integrationClass({
userId,
integrationId: params.integrationId,
});

Contributor Author

OK, looks like loadModules doesn't actually load modules; it simply instantiates the module API.

Contributor

There's a difference between "load module definitions" and "load module entities into the module instance".
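A conceptual sketch of that distinction, with hypothetical names (not the framework's actual methods):

// "Load module definitions": instantiate the API wrapper for each module the
// integration definition declares.
async function loadModuleDefinitions(integration) {
    integration.modules = {};
    for (const [key, ModuleApiClass] of Object.entries(integration.definition.modules)) {
        integration.modules[key] = new ModuleApiClass();
    }
}

// "Load module entities": hydrate those instances with the persisted
// entity/credential records referenced by the integration record.
async function loadModuleEntities(integration, integrationRecord, entityRepository) {
    for (const [entityId, moduleKey] of Object.entries(
        integrationRecord.entityReference
    )) {
        const entity = await entityRepository.findById(entityId);
        // Attach the persisted entity (and its credential) to the matching instance.
        integration.modules[moduleKey].entity = entity;
    }
}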

integrationRecord.config.type
);

const instance = new integrationClass({
Contributor Author

rename to integrationInstance

Contributor

I'm for it

// for each entity, get the moduleinstance and load them according to their keys
// If it's the first entity, load the moduleinstance into primary as well
// If it's the second entity, load the moduleinstance into target as well
const moduleTypesAndKeys =
Contributor Author

I think this comment also does not make sense anymore, right?

Contributor

The first line does make sense. The second and third lines are for backwards compatibility, where we enforced a "primary" and "target" concept via naming conventions.

Contributor

I don't think we need to maintain that though. Let things error and people correct the errors.

moduleClass &&
typeof moduleClass.definition.getName === 'function'
) {
const moduleType = moduleClass.definition.getName();
Contributor Author

moduleType also comes from the name?

Contributor

Yes, though we can debate that

const integrationClassIndex = this.integrationTypes.indexOf(type);
return this.integrationClasses[integrationClassIndex];
}
getModuleTypesAndKeys(integrationClass) {
Contributor Author

I don't get what this does.

Contributor

I don't think this implementation is fully what I intended. But anywho, the goal is that we grab an integration definition, and from it we can determine which API modules are part of it, which allows us to get the required module entity instances in order to create a complete integration record. (TODO: allow for a module to be optional and not required on creation of an integration record; potentially make modules required on a per-event basis, i.e. throw an error / force a user to assign a connection or create a new connection if they go to use a feature that is only available when they have a specific module instance added.)

The direct use case is during management inside the frontend experience. The UI should see: "The user wants to create a HubSpot integration. The HubSpot integration requires two modules. The user has one module connection (entity) available to use, but needs the other one. I'll run the auth flow for the other one and then ask them to confirm the use of that new connection."

The reason I say this implementation may not be what I intended is that we should allow multiple modules of the same type to be assigned different module names, so you have something like "slackUser" and "slackApp" both pointing to the Slack API module.
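A sketch of that intended shape (not current behavior; the definition structure and require path are assumptions): two module names backed by the same Slack API module.

const { SlackApiModule } = require('./slack-api-module'); // placeholder path, illustrative only

const Definition = {
    modules: {
        // Both names point at the same underlying API module, so an integration
        // can require a user-scoped and an app-scoped Slack connection at once.
        slackUser: { moduleName: 'slackUser', api: SlackApiModule },
        slackApp: { moduleName: 'slackApp', api: SlackApiModule },
    },
};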

Contributor

@seanspeaks seanspeaks left a comment

Made my comments in response to your comments, and added a few comments in there.

{
"$schema": "node_modules/lerna/schemas/lerna-schema.json",
"version": "1.2.2",
"version": "2.0.0-next.0",
Contributor

I have no idea if this should be committed 😅


getIntegrationById: async function(id) {
return IntegrationModel.findById(id);
getIntegrationById: async function (id) {
Contributor

I really need to lock this concept into my brain. "Git repo" takes over the entire definition of "repository" in my head.

const integration =
await integrationFactory.getInstanceFromIntegrationId({
integrationId: integrationRecord.id,
userId: getUserId(req),
Contributor

I think we discussed this on our call. I was likely just throwing in things to see what stuck; that's the context for why this is here.

req.params.credentialId
);
if (credential.user._id.toString() !== getUserId(req)) {
throw Boom.forbidden('Credential does not belong to user');
Contributor

I think there were moments where this failed? Dunno what those moments were/are though.

const {ModuleConstants} = require("./ModuleConstants");
const { ModuleConstants } = require('./ModuleConstants');

class Auther extends Delegate {
Contributor

All for it. At one point @MichaelRyanWebber and I were debating both naming and intent of the class (hi Michael! Not to rope you in but, you likely can either find the note somewhere or comment on the future improvements you/we had in mind).

this.EntityModel = definition.Entity || this.getEntityModel();
}

static async getInstance(params) {
Contributor

Async construction.

It's a debate in the Node.js world. It might be a resolved debate in Node 20? But basically: "if you need to await/promise something in order to instantiate a class, do you make it an async constructor, or do you create a static async instantiation method?" Aka the static "get".

That's the root of this decision at any rate.
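For readers unfamiliar with the pattern, a generic sketch of the static-async-factory approach being described (illustrative only, not the actual Auther code):

// Stand-in for a real database read.
const fetchEntity = async (entityId) => ({ id: entityId });

class ExampleModule {
    constructor(params) {
        // Constructors can't be async, so only synchronous setup happens here.
        this.params = params;
        this.entity = null;
    }

    static async getInstance(params) {
        const instance = new ExampleModule(params);
        // Anything that must be awaited runs here, before callers get the instance.
        instance.entity = await fetchEntity(params.entityId);
        return instance;
    }
}

// Usage: const mod = await ExampleModule.getInstance({ entityId: 'abc123' });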

const friggCommand = process.platform === 'win32' ? 'frigg.cmd' : 'frigg'

// Spawn the command
const childProcess = spawn(friggCommand, cmdArgs, {

Check failure

Code scanning / SonarCloud

OS commands should not be vulnerable to command injection attacks (High)

Change this code to not construct the OS command from user-controlled data. See more on SonarQube Cloud
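One possible way to address this class of finding, sketched under the assumption that the subcommand and arguments originate from user input: keep the binary fixed, validate inputs against an allowlist, and avoid shell interpolation. The allowed subcommands and argument pattern below are illustrative, not the project's actual fix.

const { spawn } = require('child_process');

const ALLOWED_SUBCOMMANDS = new Set(['deploy', 'start', 'doctor', 'auth']); // illustrative list

function runFrigg(subcommand, extraArgs = []) {
    if (!ALLOWED_SUBCOMMANDS.has(subcommand)) {
        throw new Error(`Unsupported frigg subcommand: ${subcommand}`);
    }
    // Drop any argument containing characters outside a conservative whitelist.
    const safeArgs = extraArgs.filter((arg) => /^[\w.@=:\/-]+$/.test(arg));
    const friggCommand = process.platform === 'win32' ? 'frigg.cmd' : 'frigg';
    // shell is false by default, so arguments are not interpreted by a shell.
    return spawn(friggCommand, [subcommand, ...safeArgs], { stdio: 'inherit' });
}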
@github-advanced-security github-advanced-security bot left a comment

CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

seanspeaks and others added 23 commits October 22, 2025 16:48
…ring-fix-011CUNiSmcNbjYnKa7zuHDCK

fix(webhooks): Ensure webhook routes are defined before catch-all proxy route
- Fixed VPC/Aurora builders to follow correct resource flow:
  stack → orphaned → create
- When managementMode=managed + vpcIsolation=isolated:
  - If stack has resources → reuse them (discover mode)
  - If stack has NO resources → create new (create-new/managed mode)
- Added comprehensive tests proving the fix
- Tests first, code second (proper TDD)

Fixes deployment bug where stack resources were being ignored
- VPC builder: check defaultVpcId (not vpcId)
- Aurora builder: check auroraClusterId (not auroraEndpoint)
- Updated tests to match actual CloudFormation discovery fields
- This completes the TDD fix for stack resource reuse
…use cases

## Problem Summary
Repository layer incorrectly mapped `type` to `subType`, conflating two distinct concepts:
- **Module Type** (`moduleName`): The integration type (salesforce, hubspot, attio) - intrinsic to the API
- **Sub-Type** (`subType`): Optional adopter-defined field to distinguish multiple instances of the same module type

## Critical Bug Fix
**GetModule use case** incorrectly used `entity.type` (mapped from `subType`) instead of `entity.moduleName` to look up module definitions, breaking module instance retrieval.

## Changes Made

### Repository Layer - Remove type↔subType Mapping
Updated the following repositories to eliminate type/subType confusion:
- `packages/core/modules/repositories/module-repository.js`
- `packages/core/modules/repositories/module-repository-mongo.js`
- `packages/core/modules/repositories/module-repository-postgres.js`

Changes:
1. `createEntity()`: Changed `subType: entityData.type || entityData.subType` → `subType: entityData.subType`
2. Return mapping: Changed `type: entity.subType` → `subType: entity.subType`
3. `updateEntity()`: Removed `if (updates.type !== undefined) data.subType = updates.type;`
4. Updated comments: Clarified that Mongoose discriminator (__t) maps to `moduleName`, not `subType`

### Use Cases - Fix Module Type Lookups
Fixed incorrect usage of `entity.type` → `entity.moduleName` in:
- `packages/core/modules/use-cases/get-module.js`
- `packages/core/modules/use-cases/get-entity-options-by-id.js`
- `packages/core/modules/use-cases/refresh-entity-options.js`
- `packages/core/modules/use-cases/test-module-auth.js`

### Schema Updates
Updated `packages/schemas/schemas/core-models.schema.json`:
- Removed `subType` from required fields for both `credentialModel` and `entityModel`
- Added `moduleName` field to `entityModel` with clear description
- Updated `subType` descriptions to clarify its optional, adopter-defined nature

## Impact
- **ModuleFactory** (used by integrations) ✅ Already correctly used `moduleName` - no changes needed
- **GetModule use case** ✅ Now correctly uses `entity.moduleName` instead of wrong `entity.type`
- **Repository layer** ✅ No longer conflates `type` with `subType`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- When CloudFormation discovery finds FriggAuroraCluster, query RDS API to get endpoint details
- Sets auroraClusterEndpoint, auroraClusterPort, auroraClusterIdentifier
- Similar to how we query EC2 for VPC ID from security group
- This completes the TDD fix - stack resources are now properly reused in isolated mode
…ciations (TDD)

- Add stack-managed endpoint check in buildVpcEndpoints()
  - Check if endpoint IDs are strings (from CloudFormation stack)
  - Only create CloudFormation resources for non-stack endpoints
  - Log reused endpoints for visibility
- Add ensureSubnetAssociations() method
  - Ensures route table associations exist when VPC endpoints created
  - Heals missing associations for VPC endpoints without NAT Gateway
  - Skips if associations already created by NAT Gateway routing
- Add 3 comprehensive tests (all passing)
  - Test stack-managed VPC endpoint reuse
  - Test VPC endpoint creation when not in stack
  - Test route table association healing
- Fix managementMode test: remove defaultVpcId to properly represent no stack VPC

Fixes VPC endpoint recreation issue (delete/create cycle)
Fixes route table associations missing when no NAT Gateway

Tests: 61 passing (was 60)
Documents the current state of OAuth authorization flow and identifies
a feature gap: framework does not currently support using different
OAuth credentials (client_id, client_secret) for different subTypes
of the same module.

Analysis includes:
- Current authorization flow walkthrough
- Impact assessment of type↔subType mapping fix
- Architecture design for future subType OAuth support
- Workaround patterns for current limitations

This addresses the question of whether the type↔subType fix impacts
integrations that want to use different OAuth apps per subType.

Conclusion: Fix is correct, no breaking changes, but subType OAuth
would require new features to support.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…onfigs

Traces the exact execution flow when using entityType="hubspot-crm" to
verify whether the framework correctly routes to the right module definition
with the right OAuth credentials.

Key findings:
- ✅ Current code DOES work for separate module types (hubspot-crm vs hubspot-marketing)
- ✅ Each module definition has unique moduleName and separate OAuth env vars
- ✅ ProcessAuthorizationCallback finds definition by moduleName match
- ✅ Entity created with correct moduleName for later retrieval
- ⚠️ This treats them as separate module TYPES, not subTypes of same module
- ❌ SubType-based OAuth (multiple instances of same moduleName) NOT supported

Pattern that works:
  moduleName: "hubspot-crm"      → client_id from HUBSPOT_CRM_CLIENT_ID
  moduleName: "hubspot-marketing" → client_id from HUBSPOT_MARKETING_CLIENT_ID

Pattern that doesn't work:
  moduleName: "hubspot" + subType: "crm"       → no subType OAuth config
  moduleName: "hubspot" + subType: "marketing" → no subType OAuth config

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
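Hedged sketch of the "pattern that works" above: two distinct module types, each reading its own OAuth env vars. The definition shape and the *_CLIENT_SECRET variable names are assumptions for illustration.

const hubspotCrmDefinition = {
    moduleName: 'hubspot-crm',
    env: {
        client_id: process.env.HUBSPOT_CRM_CLIENT_ID,
        client_secret: process.env.HUBSPOT_CRM_CLIENT_SECRET, // assumed name
    },
};

const hubspotMarketingDefinition = {
    moduleName: 'hubspot-marketing',
    env: {
        client_id: process.env.HUBSPOT_MARKETING_CLIENT_ID,
        client_secret: process.env.HUBSPOT_MARKETING_CLIENT_SECRET, // assumed name
    },
};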
Documents that moduleName comes from INSIDE the module definition object,
NOT from the property key used in the modules object.

Key findings:
- Property keys in modules: { 'key': ... } are IGNORED
- Only definition.moduleName field is used for lookups
- Object.values() extracts definitions, discarding keys
- Keys can be anything - purely for developer organization
- Best practice: match keys to moduleName for clarity

This addresses the question of whether moduleName comes from the
developer's property key choice or from the definition object itself.

Includes examples showing:
- Conventional matching keys (recommended)
- Non-matching keys (works but confusing)
- Duplicate moduleName values (breaks - unreachable modules)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
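A small sketch of the point above: property keys in the modules object are organizational only, while lookups go through each definition's moduleName (shapes illustrative).

const appDefinition = {
    modules: {
        // This key could be anything; it is never used for lookups.
        whateverKey: { moduleName: 'hubspot-crm' /* ...rest of definition */ },
        marketing: { moduleName: 'hubspot-marketing' /* ... */ },
    },
};

// Roughly what the lookup does per the commit message: keys are discarded.
const definitions = Object.values(appDefinition.modules);
const hubspotCrm = definitions.find((d) => d.moduleName === 'hubspot-crm');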
…ility

Documents the distinct purposes of moduleName vs subType and when each
should be used.

Key findings:
- moduleName: Design-time/static (defined in code, requires redeploy to change)
- subType: Runtime/dynamic (set when creating entity, no code changes needed)
- Different purposes, both have value

SubType is essential for:
✅ Multiple instances of same module type (unlimited Slack workspaces)
✅ User-friendly instance labels (personal vs work)
✅ Runtime filtering/querying
✅ Multi-tenant scenarios
✅ No code changes needed to add instances

SubType is NOT for:
❌ Different OAuth credentials (use separate moduleName)
❌ Different API endpoints (use separate moduleName)
❌ Different module behavior (use separate moduleName)

Examples:
- moduleName='slack' + subType='acme-corp' (runtime label)
- moduleName='slack' + subType='personal' (runtime label)
vs
- moduleName='hubspot-crm' (different OAuth config, design-time)
- moduleName='hubspot-marketing' (different OAuth config, design-time)

Recommendation: Keep subType as optional field for runtime flexibility.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
SQS endpoint is needed for job queues and async processing broadly,
not just for postgres database migrations.

- Remove postgres.enable condition from SQS endpoint creation
- Update comment to reflect general SQS usage
- Update test expectations to include SQS endpoint
- Update security group condition

Tests: 61 passing
Documents the discovery that subType field is vestigial/unused:
- No authorization route accepts subType parameter
- ProcessAuthorizationCallback does not set subType when creating entities
- No update entity routes exist to set it later
- No code in entire codebase sets subType to any value
- Field exists in schema and can be queried but is never populated

Evidence:
❌ GET /api/authorize - only accepts entityType
❌ POST /api/authorize - only accepts entityType and data
❌ ProcessAuthorizationCallback.createEntity - does not set subType
❌ No PATCH/PUT /api/entities routes
❌ grep "createEntity.*subType" - zero matches

Current state: subType can be stored and queried but never set or used.

Historical context: Likely vestigial from Mongoose discriminator (__t)
migration where __t was confusingly mapped to both moduleName and subType.

Options presented:
1. Remove subType (recommended) - clean up vestigial field
2. Implement subType support - add to auth flow, enable multiple instances
3. Document as manual-only - keep but mark as advanced/internal use

Recommendation: Remove (Option 1) unless there's explicit need for
multiple instances per module type, in which case implement properly
(Option 2) with full authorization flow support.

Questions to answer before deciding:
- Does production data have subType values set?
- Do adopters manually use subType via repository?
- Is multiple instances per module type needed?

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Remove subType field which was never set or used in any code path.
This field was a remnant from Mongoose discriminator (__t) migration
where it was confusingly mapped alongside moduleName.

Changes:
- Prisma schemas (MongoDB & PostgreSQL): Removed subType column from Credential and Entity models
- Repositories: Removed all subType references from create, update, filter methods
- Use cases: Removed subType from GetModule return value
- JSON schemas: Removed subType property definitions and examples
- TypeScript types: Removed subType from type definitions
- Mongoose models: Removed subType from Entity model schema

Impact:
✅ No functional changes - field was never populated
✅ Cleaner codebase - removes confusion
✅ Simpler architecture - one less field to maintain

Note: moduleName serves the purpose of identifying module type.
For multiple instances of same module type, use separate moduleName
values (e.g., 'slack-workspace-1', 'slack-workspace-2').

Database migration will be needed to drop subType columns.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Provides step-by-step instructions for:
- Checking existing data (should be none)
- Generating Prisma migrations
- Deploying to production
- Verifying successful migration
- Rollback plan if needed

Includes SQL/queries for both MongoDB and PostgreSQL.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Remove temporary analysis documents that were created during investigation:
- SUBTYPE_OAUTH_ANALYSIS.md
- POST_AUTHORIZE_TRACE.md
- MODULE_NAME_SOURCE_ANALYSIS.md
- SUBTYPE_VALUE_ANALYSIS.md
- SUBTYPE_CRITICAL_FINDING.md
- SUBTYPE_REMOVAL_MIGRATION_GUIDE.md

These were helpful for understanding the changes but are not needed
in the final codebase.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…apping-011CUPEgXrH6n8pAzwNxRrfi

Fix type↔subType Mapping Confusion and Remove Vestigial subType Field
When CloudFormation discovery finds a VPC but subnet resources are not in the stack (e.g., CREATE_FAILED but resources still exist in AWS), fall back to querying EC2 directly for Frigg-managed subnets.

This fixes the issue where VPC builder tries to create new subnets with conflicting CIDRs when subnets already exist but aren't tracked in CloudFormation.

Changes:
- Added EC2 subnet query fallback in CloudFormation discovery
- Filters by VPC ID and ManagedBy tag to find Frigg-managed subnets
- Extracts private/public subnet IDs and sorts by logical ID
- Added comprehensive tests for EC2 fallback behavior

Closes #ISSUE
When CloudFormation stack has VPC but subnet resources failed to create (CREATE_FAILED then DELETE_SKIPPED), subnets still exist in AWS but aren't tracked. Add fallback to query EC2 directly for Frigg-managed subnets tagged with ManagedBy='Frigg'.

This ensures subnet IDs are discovered even when CloudFormation state is inconsistent with actual AWS resources.
…tion stack

## Problem
KMS key alias was only checked within CloudFormation stack resources. If the
alias was created outside CloudFormation (manually or via another process), the
infrastructure code would not discover it and would attempt to create a new key
instead of using the existing one.

## Solution
Following DDD and hexagonal architecture principles:

1. Added `describeKmsKey()` method to AWS Provider Adapter
   - Implements port/adapter pattern for KMS queries
   - Enables testing without AWS SDK dependencies

2. Enhanced CloudFormation Discovery to check for KMS aliases
   - Checks `FriggKMSKeyAlias` resources in CloudFormation stack
   - Queries AWS API for expected alias name (based on serviceName/stage)
   - Works even when alias exists outside CloudFormation management

3. Fixed resource extraction to always run AWS queries
   - Changed to call `_extractFromResources()` even with empty resource list
   - Ensures AWS API queries run regardless of CloudFormation stack state

4. Added comprehensive test coverage (TDD)
   - 6 tests covering all KMS alias discovery scenarios
   - Tests verify behavior with mocked dependencies
   - All tests passing

## Impact
- Uses existing KMS keys when alias exists, avoiding duplicate key creation
- Reduces infrastructure costs by reusing existing resources
- Improves idempotency of infrastructure deployment
- Maintains backward compatibility with existing deployments

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…ra-fix-011CUQ5APFUTQGK8RbVDN8eW

fix(infrastructure): Discover KMS key alias even if not in CloudForma…
claude and others added 13 commits December 11, 2025 22:00
The base-definition-factory.js was missing .env file exclusions in both
skipEsbuildPackageConfig and functionPackageConfig. This caused local
.env files to be included in deployed Lambda packages.

Added exclusion patterns for:
- .env
- .env.*
- .env.local
- .env.*.local
- **/.env
- **/.env.*

This ensures environment files are never deployed to Lambda, preventing
accidental exposure of secrets.
…oy-014RPjyrZmnx4k33iUhiBzdj

fix: Exclude .env files from serverless package deployment
The doctor-command requires files from infrastructure/domains/health/
but the infrastructure/ directory was not included in the package.json
files array. This caused deployments to fail with:

Error: Cannot find module '../../infrastructure/domains/health/domain/value-objects/stack-identifier'

Add infrastructure/ to the files array to ensure it's published to npm.
…e-in-devtools-npm-package

fix(devtools): include infrastructure/ in npm package files
The devtools/index.js exports the test module, but the test/ directory
was not included in the package.json files array. This caused builds
to fail with:

Error: Cannot find module './test'

Add test/ to the files array to ensure it's published to npm.
Add documentation about creating pull requests for the Frigg Framework:
- Always target the 'next' branch
- Always add 'release' and 'prerelease' labels
- Ensure project compiles before creating PR
…ols-npm-package

fix(devtools): include test/ in npm package files
Add `frigg auth` command for testing API module authentication flows
without deploying full Frigg infrastructure.

Features:
- OAuth2 authentication with local callback server
- API-Key authentication support
- Credential persistence to .frigg-credentials.json
- Comprehensive testing of all requiredAuthMethods

Commands:
- `frigg auth test <module>` - Test authentication flow
- `frigg auth list` - List saved credentials
- `frigg auth get <module>` - Retrieve credentials
- `frigg auth delete [module]` - Remove credentials
API-Key modules with `getAuthorizationRequirements` now render an interactive
CLI form instead of requiring the `--api-key` flag. The form:

- Displays title from jsonSchema.title
- Shows help text from ui:help before each field
- Masks password fields (ui:widget: 'password') with *
- Validates required fields
- Supports multi-field forms (e.g., company ID, public key, private key)

The `--api-key` flag still works and takes precedence over the form.

Files added:
- json-schema-form.js - Renders JSON Schema as CLI prompts using @inquirer/prompts

Files modified:
- api-key-flow.js - Calls getAuthorizationRequirements when no --api-key provided
- index.js - Removed hard requirement for --api-key flag
- README.md, CLAUDE.md, SKILL.md - Documentation updates
- Remove sample API call step (testAuthRequest is authoritative)
- Fix credentials saved under CLI arg instead of actual module name
- Remove --no-browser option (always open browser for OAuth)
feat(devtools): add Frigg Authenticator CLI tool
async executeFriggCommand(command, args = [], cwd = process.cwd()) {
return new Promise((resolve, reject) => {
const friggCli = path.join(__dirname, '../../../frigg-cli/index.js');
const child = spawn('node', [friggCli, command, ...args], {

Check failure

Code scanning / SonarCloud

I/O function calls should not be vulnerable to path injection attacks (High)

Change this code to not construct the path from user-controlled data. See more on SonarQube Cloud
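A possible mitigation sketch for this finding too, assuming command and args can be user-influenced (not the project's actual fix): derive the script path only from __dirname and reject unknown commands before spawning.

const path = require('path');
const { spawn } = require('child_process');

const KNOWN_COMMANDS = new Set(['deploy', 'doctor', 'auth']); // illustrative list

function executeFriggCommandSafe(command, args = [], cwd = process.cwd()) {
    if (!KNOWN_COMMANDS.has(command)) {
        return Promise.reject(new Error(`Unknown frigg command: ${command}`));
    }
    // The CLI path is built only from __dirname, never from user input.
    const friggCli = path.resolve(__dirname, '../../../frigg-cli/index.js');
    return new Promise((resolve, reject) => {
        const child = spawn('node', [friggCli, command, ...args], { cwd });
        child.on('error', reject);
        child.on('close', (code) =>
            code === 0 ? resolve(code) : reject(new Error(`frigg exited with code ${code}`))
        );
    });
}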
d-klotz and others added 15 commits January 12, 2026 16:43
…for-refresh-auth

Fix: use correct property for refresh auth
The token refresh test was calling `api.refreshAccessToken()` without
arguments, but `OAuth2Requester.refreshAccessToken` expects the refresh
token to be passed in the params object: `{ refresh_token: ... }`.

This caused the token refresh test to fail with:
"Cannot read properties of undefined (reading 'refresh_token')"
…for-refresh-auth

fix(requester): improve auth refresh handling and tests
…resh

fix(devtools): pass refresh_token to refreshAccessToken in auth-tester
…persistence

Add data JSON field to Entity model following the same pattern used for
Credential, enabling persistence of dynamic entity properties from
apiPropertiesToPersist.entity (e.g., domain, region, accountId).

Changes:
- Add data Json field to Entity in both MongoDB and PostgreSQL schemas
- Update all module repositories to handle dynamic data persistence
- Fix DocumentDB updateEntity to not return null on zero modifications
feat(core): add data JSON field to Entity model for dynamic property persistence
The refreshAuth() method was catching errors silently, making it
impossible to diagnose token refresh failures. Added console logging
to capture token refresh attempt details, success confirmation, and
failure details including error message and response data.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…r528

fix(core): add debug logging to OAuth2Requester.refreshAuth()
@sonarqubecloud

Quality Gate failed

Failed conditions
317 Security Hotspots
5.6% Duplication on New Code (required ≤ 3%)
D Reliability Rating on New Code (required ≥ A)
E Security Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud

