feat: compressed mint photon api #2198
base: main
Conversation
Important: Review skipped due to path filters. ⛔ Files ignored due to path filters (3). CodeRabbit blocks several paths by default; you can override this behavior by explicitly including those paths in the path filters. You can disable this status message in the CodeRabbit settings.
📝 Walkthrough

Centralizes bootstrap orchestration (new async `run_bootstrap`/`BootstrapResult`), adds Photon/indexer compressed-mint and interface APIs and models, integrates indexer→client interface conversions and RPCs, extends test harnesses with indexer interface tests, and includes several supporting refactors and docs tweaks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Forester as Forester
    participant Shutdown as ShutdownListener
    participant HTTP as HTTP Client
    participant Indexer as Photon/Indexer
    participant Processor as process_fn
    Forester->>Shutdown: attach optional oneshot shutdown receiver
    Forester->>HTTP: build or reuse HTTP client
    Forester->>Indexer: call bootstrap (standard or v2) via run_bootstrap
    Indexer->>HTTP: request paginated account pages
    HTTP->>Processor: deliver RawAccountData
    Processor->>Forester: return processing result (inserted/fetched)
    Forester->>Shutdown: abort/cleanup shutdown listener
    Forester-->>Forester: return BootstrapResult{pages,fetched,inserted}
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/config.rs (1)
365-399: Include the offending entry in `compressible_pda_programs` parse errors. When multiple entries are provided, returning only the parser error can make it hard to spot which value failed. Consider echoing the raw entry alongside the error.
Proposed refinement
```diff
-            .map(|s| {
-                s.parse::<crate::compressible::config::PdaProgramConfig>()
-                    .map_err(|e| ConfigError::InvalidArguments {
-                        field: "compressible_pda_programs",
-                        invalid_values: vec![e],
-                    })
-            })
+            .map(|s| {
+                let raw = s.clone();
+                s.parse::<crate::compressible::config::PdaProgramConfig>()
+                    .map_err(|e| ConfigError::InvalidArguments {
+                        field: "compressible_pda_programs",
+                        invalid_values: vec![format!("{raw}: {e}")],
+                    })
+            })
```

forester/tests/test_compressible_ctoken.rs (1)
252-270: Wait for subscription-driven tracker updates before asserting. These assertions can race the websocket-driven tracker updates (airdrop + compression), which will intermittently flake under load. Add a small polling loop with timeout before asserting lamports and remaining accounts.
🐛 Proposed fix (apply the same pattern after compression)
```diff
-    let accounts = tracker.get_all_token_accounts();
-    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
+    let accounts = tokio::time::timeout(Duration::from_secs(5), async {
+        loop {
+            let accounts = tracker.get_all_token_accounts();
+            if accounts[0].lamports == lamports + 10_000_000 {
+                break accounts;
+            }
+            sleep(Duration::from_millis(200)).await;
+        }
+    })
+    .await
+    .expect("Tracker did not observe lamports update in time");
+    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
```

Also applies to: 334-341
forester/src/compressible/subscriber.rs (1)
176-225: Handle account closures by invoking `handle_removal`.
`programSubscribe` sends updates for closed accounts with `lamports == 0` (often empty data). The current path always tries to decode and `handle_update`, which leaves stale entries in trackers. Add a guard to call `handle_removal` when lamports are zero or data is empty.
🐛 Proposed fix

```diff
-    // Decode Base64 account data
+    // Account closed? Remove it from the tracker
+    if response.value.account.lamports == 0 {
+        self.handler.handle_removal(&pubkey);
+        return;
+    }
+
+    // Decode Base64 account data
     use solana_account_decoder::UiAccountData;
```
🤖 Fix all issues with AI agents
In `@forester/Cargo.toml`:
- Line 46: The crate currently pins hex = "0.4" in its Cargo.toml; replace that
explicit version with a workspace-managed dependency by changing the dependency
entry to use workspace = true (e.g., hex = { workspace = true }) and ensure the
workspace root contains a [workspace.dependencies] entry for hex (adding hex =
"0.4" there if missing) so all workspace crates share the same hex version.
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 126-161: The get_current_slot function duplicates timeout, HTTP
and RPC error handling; refactor it to call the existing send_rpc_request helper
instead of performing its own client.post/.send/.json and timeout logic. Build
the same JSON payload inside get_current_slot, call send_rpc_request(rpc_url,
payload, RPC_REQUEST_TIMEOUT, client) (or the send_rpc_request signature used in
the repo), then extract the slot via result.get("result").and_then(|v|
v.as_u64()).ok_or_else(|| anyhow::anyhow!("Failed to extract slot from
response")); remove the timeout block and the duplicated error mapping so slot
fetching reuses send_rpc_request’s consistent HTTP status and RPC error
handling.
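A hypothetical sketch of the refactored `get_current_slot`, assuming the `send_rpc_request(rpc_url, payload, RPC_REQUEST_TIMEOUT, client)` shape named in the prompt; the repo's actual helper signature may differ.

```rust
async fn get_current_slot(client: &reqwest::Client, rpc_url: &str) -> anyhow::Result<u64> {
    let payload = serde_json::json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getSlot",
    });
    // send_rpc_request is assumed to own the timeout, HTTP status and RPC
    // error handling, so none of that is duplicated here.
    let result = send_rpc_request(rpc_url, payload, RPC_REQUEST_TIMEOUT, client).await?;
    result
        .get("result")
        .and_then(|v| v.as_u64())
        .ok_or_else(|| anyhow::anyhow!("Failed to extract slot from response"))
}
```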
In `@forester/src/compressible/config.rs`:
- Around line 35-38: Update the doc comments that currently mention
`discriminator_hex` and `--pda-program` to reflect that parsing/serde uses
base58 and the CLI flag is `--compressible-pda-program`; specifically edit the
top comment block for the compressible PDA program configuration (the doc above
the compressible PDA config struct/type in config.rs) and the other occurrences
around lines 148-151 to say "base58" instead of "discriminator_hex" and to
reference `--compressible-pda-program` instead of `--pda-program`, so the
documentation matches the actual parsing and CLI wiring.
In `@forester/src/compressible/ctoken/state.rs`:
- Around line 77-115: The method update_from_token must skip mint/non-token
accounts so the tracker only stores actual token accounts; add an early guard in
update_from_token that detects when the provided Token (ctoken) represents a
mint (not a token account) and return Ok(()) without inserting; use whatever
discriminator the Token type provides (e.g., an is_mint()/is_account() check or
pattern match) to decide this, keeping the rest of the logic
(calculate_compressible_slot, building CTokenAccountState, and calling
self.insert) unchanged; this prevents get_ready_to_compress from including mint
accounts.
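A sketch of the early guard, assuming a hypothetical `is_mint()` discriminator on the ctoken `Token` type and an illustrative `CTokenAccountState` shape (the prompt leaves both open):

```rust
pub fn update_from_token(&self, pubkey: Pubkey, token: &Token) -> anyhow::Result<()> {
    // Hypothetical discriminator check: skip mints so that
    // get_ready_to_compress only ever sees real token accounts.
    if token.is_mint() {
        return Ok(());
    }

    // Existing logic, unchanged in spirit:
    let compressible_slot = self.calculate_compressible_slot(token)?;
    self.insert(pubkey, CTokenAccountState { pubkey, compressible_slot });
    Ok(())
}
```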
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 52-121: compress_batch currently builds all CompressAndCloseMint
instructions and sends them in a single transaction (using instruction_futures
-> instructions -> rpc.create_and_send_transaction), which can exceed Solana
transaction size/compute limits; fix by chunking mint_states into smaller
batches (e.g., max_instructions_per_tx constant) and for each chunk build
instructions via the existing instruction_futures logic and call
create_and_send_transaction per chunk, collecting/returning the final
Signature(s) or changing the method to return a Vec<Signature> or the last
signature, and ensure errors from create_and_send_transaction are mapped as they
are now; update compress_batch signature/return accordingly and reuse existing
variables (rpc_pool, payer_keypair, MintActionParams,
create_mint_action_instruction) so only batching logic is added.
- Around line 183-186: The call to
futures::stream::iter(compression_futures).buffer_unordered(max_concurrent) can
panic if max_concurrent is 0; clamp the concurrency to at least 1 before calling
buffer_unordered (e.g., replace uses of max_concurrent with
max_concurrent.max(1) or create let concurrency = max_concurrent.max(1) and pass
that) in the compressor.rs spot where compression_futures and max_concurrent are
used, and apply the same guard to the PDA compressor and any other
buffer_unordered call that accepts user-supplied concurrency values.
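A sketch of the chunking suggested in the first `mint/compressor.rs` item above; `MAX_INSTRUCTIONS_PER_TX` and `build_chunk_instructions` are illustrative names, and the `create_and_send_transaction` call shape is assumed from this review.

```rust
// Conservative per-transaction cap; tune against actual size/compute limits.
const MAX_INSTRUCTIONS_PER_TX: usize = 5;

let mut signatures = Vec::new();
for chunk in mint_states.chunks(MAX_INSTRUCTIONS_PER_TX) {
    // Reuse the existing instruction_futures logic, but per chunk.
    let instructions = build_chunk_instructions(chunk).await?;
    let signature = rpc
        .create_and_send_transaction(&instructions, &payer_keypair.pubkey(), &[&payer_keypair])
        .await?;
    signatures.push(signature);
}
// compress_batch then returns Vec<Signature> instead of a single Signature.
```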
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 189-276: The current implementation builds and sends a single
batched instruction using account_states (via build_compress_accounts_idempotent
and create_and_send_transaction), which can exceed Solana transaction/account
limits; modify compressible logic to chunk account_states into safe-sized
batches (e.g., configurable MAX_BATCH_SIZE) and for each chunk repeat deriving
compressed_addresses, fetching compressed_accounts/hashes, calling
get_validity_proof, building the ix with build_compress_accounts_idempotent, and
sending via create_and_send_transaction, aggregating/returning the signatures
(or return the last signature) and ensure errors are handled per-chunk;
reference account_states, derive_address, compressed_addresses,
compressed_accounts, hashes, proof_with_context,
build_compress_accounts_idempotent, and create_and_send_transaction when
implementing the chunking.
- Around line 165-168: Clamp the concurrency value passed into buffer_unordered
to at least 1 to avoid runtime panics: compute a local capped value (e.g., let
cap = std::cmp::max(1, max_concurrent_batches as usize) or std::cmp::max(1,
max_concurrent_sends as usize) depending on the site) and replace usages of
buffer_unordered(max_concurrent) / buffer_unordered(max_concurrent_batches) with
buffer_unordered(cap). Apply the same pattern where you iterate over
compression_futures / mint compression futures / send transaction futures /
epoch_manager futures so the variables used by buffer_unordered are never zero.
In `@forester/src/compressible/pda/types.rs`:
- Around line 1-28: Add PartialEq and Eq derives to PdaAccountState so it can be
compared in tests; update the struct declaration for PdaAccountState (the pub
struct PdaAccountState { ... }) to include #[derive(Clone, Debug, PartialEq,
Eq)] and ensure no fields (Pubkey or u64) prevent deriving Eq/PartialEq.
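A sketch of the suggested derive; field names are illustrative (the prompt only notes the struct holds `Pubkey` and `u64` fields, both of which support `Eq`):

```rust
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct PdaAccountState {
    pub pubkey: Pubkey,
    pub lamports: u64,
    pub compressible_slot: u64,
}
```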
In `@forester/src/epoch_manager.rs`:
- Around line 2035-2055: The extra pda_tracker.remove(&account_state.pubkey)
call in the results processing loop duplicates removal already performed by
PdaCompressor::compress_batch_concurrent (and similarly the MintCompressor
counterpart mentioned for lines 2110–2123); remove the redundant remove call(s)
here so that only the compressor implementation owns tracker mutation, leaving
the match arms to only log success/error and update counters (e.g., keep
total_compressed increment on Ok, but delete the pda_tracker.remove invocation),
and ensure the compressor functions remain responsible for calling
pda_tracker.remove when they succeed.
In `@forester/src/lib.rs`:
- Around line 241-259: The current code only spawns the CToken bootstrap when
shutdown_bootstrap is Some, skipping tracker initialization when it's None;
always spawn the tokio task and pass the optional shutdown signal through.
Extract rpc_url and clone ctoken_tracker as before, capture shutdown_bootstrap
(the Option) into a local variable (or clone/move it into the closure) and call
compressible::bootstrap_ctoken_accounts(rpc_url, tracker_clone,
shutdown_bootstrap_option).await inside tokio::spawn, logging errors or
completion as before so the bootstrap runs regardless of whether the shutdown
receiver is present.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 138-148: Replace the current multi-step checks that call
rpc.get_account(mint_pda), deserialize via Mint::deserialize and individually
assert mint.metadata.mint_decompressed with a single assert_eq against an
expected Mint instance: deserialize the on‑chain account into `mint` (using
Mint::deserialize), extract any runtime values (PDAs, timestamps, lamports,
etc.) you need, construct an `expected_mint: Mint` using those runtime values,
then assert_eq!(mint, expected_mint); factor the construction into a small
helper (e.g., build_expected_mint) to keep the test DRY and reuse later in this
file.
- Around line 57-69: The comment and the test disagree about
CreateMintParams.rent_payment semantics: the comment says CreateMint
auto-decompresses when rent_payment > 0 but the test sets rent_payment: 0;
update one to match the intended behavior. Either change the comment to say
"rent_payment == 0 => immediately compressible for testing" or set
CreateMintParams { ... rent_payment: 1, ... } to trigger auto-decompression;
adjust the inline comment accordingly so CreateMintParams and its rent_payment
comment are consistent.
In `@forester/tests/test_compressible_pda.rs`:
- Around line 323-329: Wrap the calls to get_create_accounts_proof (which
internally calls get_validity_proof) in a retry loop at the four test call
sites: inside test_compressible_pda_bootstrap,
test_compressible_pda_compression, and both places in
test_compressible_pda_subscription; perform up to 3 attempts with a 5-second
backoff between attempts (use tokio::time::sleep(Duration::from_secs(5))). On
each attempt await get_create_accounts_proof and break on success; if it errors,
log or store the error and retry until attempts exhausted, then return or expect
the final error so the test still fails if all retries fail. Ensure you only
change the invocation sites (wrapping the existing call) and preserve the
original arguments and behavior.
In `@sdk-libs/client/src/indexer/options.rs`:
- Around line 116-158: GetCompressedMintsByAuthorityOptions currently leaves
authority_type as None which relies on server defaults; change the Rust default
so clients default to MintAuthorityType::Both. Update the Default implementation
or the GetCompressedMintsByAuthorityOptions::new() constructor to set
authority_type = Some(MintAuthorityType::Both) (and keep the existing Optional
type/signature), or adjust docs to explicitly state None means Both—preferably
set Some(MintAuthorityType::Both) in new()/Default to make client behavior
explicit; reference GetCompressedMintsByAuthorityOptions, new(), Default,
authority_type, and MintAuthorityType::Both when making the change.
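A sketch of the suggested default; `cursor` and `limit` are assumed fields, only the `authority_type` default is the point:

```rust
impl Default for GetCompressedMintsByAuthorityOptions {
    fn default() -> Self {
        Self {
            // Explicit client-side default instead of relying on the server.
            authority_type: Some(MintAuthorityType::Both),
            cursor: None,
            limit: None,
        }
    }
}
```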
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 10-14: The upgradeable_programs field currently uses a tuple
Vec<(String, String, String)> which is unclear; replace it with a named struct
(e.g., UpgradeableProgramConfig with fields program_id, program_path,
upgrade_authority) and change the type of upgradeable_programs in
LightValidatorConfig to Vec<UpgradeableProgramConfig>, updating any construction
and pattern matching sites that reference upgradeable_programs or the tuple
order, while leaving sbf_programs as Vec<(String,String)> if desired.
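A sketch of the named struct; the three field names match the tuple order used elsewhere in this review:

```rust
#[derive(Debug, Clone)]
pub struct UpgradeableProgramConfig {
    pub program_id: String,
    pub program_path: String,
    pub upgrade_authority: String,
}

pub struct LightValidatorConfig {
    // sbf_programs can stay a tuple if desired.
    pub sbf_programs: Vec<(String, String)>,
    pub upgradeable_programs: Vec<UpgradeableProgramConfig>,
    // ...other existing fields unchanged
}
```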
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 999-1022: Replace the todo!() panics in get_compressed_mint,
get_compressed_mint_by_pda, and get_compressed_mints_by_authority with a
deterministic Err(IndexerError) return so callers get an error result instead of
a panic; follow the existing error pattern used elsewhere in the indexer (e.g.,
return an IndexerError variant such as UnimplementedOperation or a new
IndexerError::new("get_compressed_mint not implemented") consistent with how
other unimplemented RPCs are reported) and include the method name in the error
message to aid diagnostics.
```rust
async fn get_compressed_mint(
    &self,
    _address: Address,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<Option<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mint not implemented")
}

async fn get_compressed_mint_by_pda(
    &self,
    _mint_pda: &Pubkey,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<Option<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mint_by_pda not implemented")
}

async fn get_compressed_mints_by_authority(
    &self,
    _authority: &Pubkey,
    _options: Option<GetCompressedMintsByAuthorityOptions>,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mints_by_authority not implemented")
}
```
Avoid todo!() panics in compressed-mint stub methods.
These three trait-required methods will panic if called during testing or local development. Instead, return a deterministic error that allows test code to handle the unimplemented operation gracefully.
Replace todo!("...") with an appropriate IndexerError variant (e.g., a structured error or similar pattern used elsewhere in the indexer for unimplemented methods) so callers receive a proper error result rather than a panic.
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 999 - 1022,
Replace the todo!() panics in get_compressed_mint, get_compressed_mint_by_pda,
and get_compressed_mints_by_authority with a deterministic Err(IndexerError)
return so callers get an error result instead of a panic; follow the existing
error pattern used elsewhere in the indexer (e.g., return an IndexerError
variant such as UnimplementedOperation or a new
IndexerError::new("get_compressed_mint not implemented") consistent with how
other unimplemented RPCs are reported) and include the method name in the error
message to aid diagnostics.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/src/compressible/ctoken/compressor.rs (1)
145-152: Verify the sentinel used for `delegate_index` when no delegate is present.
Lines 145-152 use `unwrap_or(0)`. If `0` is not the explicit "none" sentinel for `CompressAndCloseIndices`, this can incorrectly point to the first packed account (often the output queue), causing instruction failure or misbinding. Run the following to confirm how `delegate_index` is interpreted and which sentinel is expected:

```bash
#!/bin/bash
# Inspect CompressAndCloseIndices and delegate_index usage
rg -n "CompressAndCloseIndices|delegate_index" -S
rg -n "compress_and_close|delegate_index" -S programs/ program-libs/ sdk-libs/ forester/ || true
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/bootstrap.rs`:
- Around line 18-106: The bootstrap_mint_accounts function duplicates common
scaffolding (shutdown flag setup, reqwest client, localhost branching, logging)
found across ctoken/mint/pda bootstraps; extract that into a shared helper
(e.g., run_bootstrap or BootstrapRunner) that accepts rpc_url, program_id
(Pubkey), filters (Option<Vec<serde_json::Value>>), shutdown_rx
(Option<oneshot::Receiver<()>>), a processing closure (FnMut(RawAccountData) ->
bool) and a label for logs, and then call that helper from
bootstrap_mint_accounts while keeping only mint-specific bits
(MINT_ACCOUNT_TYPE_FILTER, ACCOUNT_TYPE_OFFSET, the process_account closure, and
calls to tracker.update_from_account) and replace direct calls to
bootstrap_standard_api/bootstrap_v2_api with the helper’s internal selection
logic; ensure the helper returns the same tuple shapes (pages/fetched/inserted)
so existing call sites can be adapted with minimal changes.
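A sketch of the shared helper's shape; `RawAccountData` and `BootstrapResult` are the names used in this review, the rest (argument order, error type) is illustrative:

```rust
pub async fn run_bootstrap<F>(
    rpc_url: String,
    program_id: Pubkey,
    filters: Option<Vec<serde_json::Value>>,
    shutdown_rx: Option<tokio::sync::oneshot::Receiver<()>>,
    mut process_fn: F,
    label: &str,
) -> anyhow::Result<BootstrapResult>
where
    F: FnMut(RawAccountData) -> bool,
{
    // Internally: wire the shutdown flag, build/reuse the reqwest client,
    // select bootstrap_standard_api (localhost) or bootstrap_v2_api, and
    // fold the per-page results into the shared counters.
    let (pages, fetched, inserted) = (0, 0, 0); // placeholder for the shared logic
    Ok(BootstrapResult { pages, fetched, inserted })
}
```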
In `@forester/src/compressible/mint/state.rs`:
- Around line 18-32: The compressible-slot math in calculate_compressible_slot
is off-by-one: use the first unpaid epoch as the compressible boundary. Update
calculate_compressible_slot to compute compressible_slot as
(get_last_funded_epoch(...) + 1) * SLOTS_PER_EPOCH (referencing
calculate_compressible_slot, get_last_funded_epoch and SLOTS_PER_EPOCH) so
is_ready_to_compress (which checks current_slot > compressible_slot) only
returns true once the account has entered an unpaid epoch.
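The arithmetic of the fix, in one line; the argument to `get_last_funded_epoch` is illustrative:

```rust
// First unpaid epoch marks the boundary; is_ready_to_compress checks
// current_slot > compressible_slot, so this fires only once the account
// is actually inside an unpaid epoch.
let compressible_slot = (get_last_funded_epoch(&account)? + 1) * SLOTS_PER_EPOCH;
```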
In `@forester/src/compressible/pda/bootstrap.rs`:
- Around line 99-106: The memcmp filter built for the discriminator (see
discriminator_base58, filters, and program_config.discriminator) omits an
explicit encoding field; update the JSON filter to include "encoding": "base58"
alongside "offset" and "bytes" so RPCs that require explicit encoding receive
base58, matching the pattern used in mint/bootstrap.rs.
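The filter shape with the explicit encoding, sketched with `serde_json::json!` (the `discriminator_offset` name is illustrative; `discriminator_base58` comes from the prompt above):

```rust
let filter = serde_json::json!({
    "memcmp": {
        "offset": discriminator_offset,
        "bytes": discriminator_base58,
        // Explicit encoding, matching the pattern in mint/bootstrap.rs.
        "encoding": "base58",
    }
});
```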
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 284-305: The batch transaction path logs a warning when
rpc.confirm_transaction(signature) returns false but still returns
Ok(signature), causing callers (like compress()) to treat it as success; update
the compress_accounts_idempotent flow so that when confirmed is false you return
an Err (e.g., anyhow::anyhow!("Batch transaction not confirmed: {}", signature))
instead of Ok(signature), keep the tracing::warn log, and ensure you only remove
entries from self.tracker when confirmed is true (no tracker changes on the
error path); use the same error crate/type used elsewhere in this file to match
existing error handling.
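A sketch of the described confirmation handling, assuming the `rpc.confirm_transaction(signature)` call shape referenced above:

```rust
let confirmed = rpc.confirm_transaction(signature).await?;
if !confirmed {
    tracing::warn!("Batch transaction not confirmed: {}", signature);
    // Surface the failure instead of silently returning Ok(signature).
    return Err(anyhow::anyhow!("Batch transaction not confirmed: {}", signature));
}
// The tracker is only mutated on the confirmed path.
for state in &account_states {
    self.tracker.remove(&state.pubkey);
}
Ok(signature)
```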
In `@forester/tests/test_compressible_mint.rs`:
- Around line 354-439: Replace the if/else that checks ready_accounts with a
precondition assert and then run the compression logic unconditionally: remove
the surrounding if !ready_accounts.is_empty() { ... } else { panic!(...) }
wrapper and add assert!(!ready_accounts.is_empty(), "Mint should be ready to
compress with rent_payment=0"); above the existing code that creates
MintCompressor::new(...), calls
compressor.compress_batch(&ready_accounts).await, polls
rpc.get_account(mint_pda), and exercises
rpc.get_compressed_account/get_compressed_mint/get_compressed_mint_by_pda so the
body no longer needs nesting or the else panic.
In `@forester/tests/test_compressible_pda.rs`:
- Around line 403-411: Extract the repeated polling pattern into a reusable
async helper like wait_for_condition(timeout: Duration, poll_interval: Duration,
condition: F) -> bool (as suggested) and replace the manual loops (e.g., the
loop that uses start = tokio::time::Instant::now(); while start.elapsed() <
timeout { if !tracker.is_empty() { ... }
sleep(Duration::from_millis(500)).await; }) with calls to that helper passing
the 60s timeout, 500ms interval and a closure that checks tracker.is_empty();
ensure the helper uses tokio::time::Instant and tokio::time::sleep so tests
remain async-compatible and update other occurrences (lines referenced) to call
wait_for_condition to remove duplication and centralize timeout behavior.
In `@program-libs/compressible/Cargo.toml`:
- Line 38: The solana-rent dependency in Cargo.toml is declared unconditionally
but is only used as a mock fallback when not on-chain (see the
#[cfg(not(all(...)))] guard in src/rent/mod.rs), so mark the dependency optional
by adding optional = true to the solana-rent entry in Cargo.toml while not tying
it to any feature flags (do not add it to the solana feature or other features);
keep all existing uses and cfg guards unchanged so solana-rent remains available
for tests/local dev but not forced for consumers.
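A sketch of the dependency entry; the version shown is illustrative, keep whatever the crate already pins:

```toml
[dependencies]
# optional = true with no feature wiring; cargo still exposes an implicit
# `solana-rent` feature, so tests/local dev can enable it while downstream
# consumers are not forced to pull it in.
solana-rent = { version = "2", optional = true }
```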
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 71-78: The loop that appends upgradeable program args (iterating
config.upgradeable_programs and calling path.push_str with
upgradeable_program.program_path) can break if paths contain spaces/special
chars; update the push to quote or shell-escape each argument (e.g., wrap
program_path/program_id/upgrade_authority in quotes or use a shell-escaping
helper like shell_escape::escape) so the constructed command string safely
preserves spaces and special characters, and ensure the module imports the
escape helper if you choose shell_escape.
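If you go the `shell_escape` route, a sketch (the crate's top-level `escape` takes and returns a `Cow<str>`):

```rust
use std::borrow::Cow;

path.push_str(&format!(
    " --upgradeable-program {} {} {}",
    shell_escape::escape(Cow::from(upgradeable_program.program_id.as_str())),
    shell_escape::escape(Cow::from(upgradeable_program.program_path.as_str())),
    shell_escape::escape(Cow::from(upgradeable_program.upgrade_authority.as_str())),
));
```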
♻️ Duplicate comments (7)
forester/tests/test_compressible_pda.rs (1)
323-329: Add retry/backoff around `get_create_accounts_proof` to avoid CI flakiness. This call to `get_create_accounts_proof` (which internally calls `get_validity_proof`) is susceptible to transient indexer failures, as evidenced by the pipeline failure. The same issue exists at lines 511, 773, and 835. Consider wrapping these calls with retry logic (max 3 attempts, 5-second backoff) to improve CI stability. Based on learnings, CI should run batched tree transfer tests with retry logic due to known flakiness.
🔧 Suggested retry wrapper
```rust
async fn get_create_accounts_proof_with_retry<R: Rpc + Indexer>(
    rpc: &R,
    program_id: &Pubkey,
    inputs: Vec<CreateAccountsProofInput>,
) -> Result<CreateAccountsProofResult, Box<dyn std::error::Error>> {
    let max_attempts = 3;
    let mut last_error = None;
    for attempt in 1..=max_attempts {
        match get_create_accounts_proof(rpc, program_id, inputs.clone()).await {
            Ok(result) => return Ok(result),
            Err(e) => {
                println!("Attempt {}/{} failed: {:?}", attempt, max_attempts, e);
                last_error = Some(e);
                if attempt < max_attempts {
                    sleep(Duration::from_secs(5)).await;
                }
            }
        }
    }
    Err(last_error.unwrap().into())
}
```

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Avoid `todo!()` panics in compressed-mint stubs. These methods will panic if the new Indexer APIs are invoked during tests; return a deterministic `IndexerError` instead so callers can handle the unimplemented path.
🔧 Proposed fix

```diff
@@
 async fn get_compressed_mint(
     &self,
     _address: Address,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint not implemented")
+    Err(IndexerError::NotImplemented(
+        "get_compressed_mint not implemented".to_string(),
+    ))
 }
@@
 async fn get_compressed_mint_by_pda(
     &self,
     _mint_pda: &Pubkey,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint_by_pda not implemented")
+    Err(IndexerError::NotImplemented(
+        "get_compressed_mint_by_pda not implemented".to_string(),
+    ))
 }
@@
 async fn get_compressed_mints_by_authority(
     &self,
     _authority: &Pubkey,
     _authority_type: MintAuthorityType,
     _options: Option<GetCompressedMintsByAuthorityOptions>,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mints_by_authority not implemented")
+    Err(IndexerError::NotImplemented(
+        "get_compressed_mints_by_authority not implemented".to_string(),
+    ))
 }
```

forester/src/lib.rs (1)
241-259: CToken bootstrap is skipped when `shutdown_bootstrap` is `None`. That leaves the ctokens tracker without historical state while PDA/Mint bootstraps still run. Spawn the task unconditionally and pass the optional shutdown receiver through.
🔧 Suggested fix
```diff
@@
-    // Spawn bootstrap task for ctokens
-    if let Some(shutdown_bootstrap_rx) = shutdown_bootstrap {
-        let tracker_clone = ctoken_tracker.clone();
-        let rpc_url = config.external_services.rpc_url.clone();
-
-        tokio::spawn(async move {
-            if let Err(e) = compressible::bootstrap_ctoken_accounts(
-                rpc_url,
-                tracker_clone,
-                Some(shutdown_bootstrap_rx),
-            )
-            .await
-            {
-                tracing::error!("CToken bootstrap failed: {:?}", e);
-            } else {
-                tracing::info!("CToken bootstrap complete");
-            }
-        });
-    }
+    // Spawn bootstrap task for ctokens
+    let tracker_clone = ctoken_tracker.clone();
+    let rpc_url = config.external_services.rpc_url.clone();
+    let shutdown_bootstrap_rx = shutdown_bootstrap;
+
+    tokio::spawn(async move {
+        if let Err(e) = compressible::bootstrap_ctoken_accounts(
+            rpc_url,
+            tracker_clone,
+            shutdown_bootstrap_rx,
+        )
+        .await
+        {
+            tracing::error!("CToken bootstrap failed: {:?}", e);
+        } else {
+            tracing::info!("CToken bootstrap complete");
+        }
+    });
```

forester/src/compressible/mint/compressor.rs (1)
208-211: Prevent panic when `max_concurrent == 0`. `buffer_unordered(0)` will panic; clamp or validate before calling.
🔧 Proposed fix

```diff
-    let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+    let concurrency = max_concurrent.max(1);
+    let results: Vec<_> = futures::stream::iter(compression_futures)
+        .buffer_unordered(concurrency)
```

forester/src/compressible/pda/compressor.rs (1)
165-168: Guard `max_concurrent` against zero before `buffer_unordered`. Lines 165-168 can panic if `max_concurrent == 0`; this was already flagged earlier.

forester/src/epoch_manager.rs (1)
2035-2055: Redundant tracker removal after `compress_batch_concurrent`. This was flagged in a prior review: both `PdaCompressor::compress_batch_concurrent` and `MintCompressor::compress_batch_concurrent` already remove successful accounts from their trackers internally. The explicit `pda_tracker.remove(&account_state.pubkey)` (line 2043) and `mint_tracker.remove(&mint_state.pubkey)` (line 2115) calls duplicate that work, causing unnecessary DashMap writes. Consider letting only the compressor implementation own removal, leaving these match arms to just log and update counters.
Also applies to: 2110-2123
forester/tests/test_compressible_mint.rs (1)
91-104: Comment contradicts the actual `rent_payment` value. Line 91 states "CreateMint auto-decompresses (rent_payment > 0)" but line 102 sets `rent_payment: 0`. This inconsistency was flagged previously. Either update the comment to reflect that `rent_payment=0` means "immediately compressible for testing" or adjust the value if decompression behavior differs.
📝 Suggested fix

```diff
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 makes mint immediately compressible for testing
     let params = CreateMintParams {
```
```rust
let timeout = Duration::from_secs(60);

while start.elapsed() < timeout {
    if !tracker.is_empty() {
        println!("Bootstrap found {} PDA accounts", tracker.len());
        break;
    }
    sleep(Duration::from_millis(500)).await;
}
```
🧹 Nitpick | 🔵 Trivial
Consider extracting the polling timeout pattern.
The 60-second timeout with 500ms polling interval appears in multiple places (here, line 814, line 875, line 942). Consider extracting this into a helper to reduce duplication and ensure consistent timeout behavior across tests.
```rust
async fn wait_for_condition<F, Fut>(
    timeout: Duration,
    poll_interval: Duration,
    mut condition: F,
) -> bool
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = bool>,
{
    let start = tokio::time::Instant::now();
    while start.elapsed() < timeout {
        if condition().await {
            return true;
        }
        sleep(poll_interval).await;
    }
    false
}
```

🤖 Prompt for AI Agents
In `@forester/tests/test_compressible_pda.rs` around lines 403 - 411, Extract the
repeated polling pattern into a reusable async helper like
wait_for_condition(timeout: Duration, poll_interval: Duration, condition: F) ->
bool (as suggested) and replace the manual loops (e.g., the loop that uses start
= tokio::time::Instant::now(); while start.elapsed() < timeout { if
!tracker.is_empty() { ... } sleep(Duration::from_millis(500)).await; }) with
calls to that helper passing the 60s timeout, 500ms interval and a closure that
checks tracker.is_empty(); ensure the helper uses tokio::time::Instant and
tokio::time::sleep so tests remain async-compatible and update other occurrences
(lines referenced) to call wait_for_condition to remove duplication and
centralize timeout behavior.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 117-123: Replace uses of the deprecated base64::decode(...) with
the engine-based API so the code compiles under base64 v0.20+; for example,
change the call that assigns data from base64::decode(data_str) (the variable
named data in bootstrap_helpers.rs) to use
base64::engine::general_purpose::STANDARD.decode(data_str) (or import
base64::Engine and call STANDARD.decode(data_str)), preserve the existing error
handling and debug log that references pubkey, and apply the same refactor to
other occurrences in subscriber.rs and tree_data_sync.rs so all base64 decoding
uses the engine-based API.
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 78-103: The code currently deserializes the config with
LightConfig::try_from_slice (skipping SDK checks) and uses a hardcoded
derive_pda(program_id, 0); replace the raw deserialization with the SDK
validator by calling LightConfig::load_checked on the fetched config account
(the account returned for config_pda) so the SDK can enforce config_bump == 0,
verify the PDA derivation matches config_pda, and apply any other constraints;
keep using LightConfig::derive_pda(program_id, 0) to locate config_pda, fetch
the account via rpc.get_account(...) as you already do, then pass that account
into LightConfig::load_checked and propagate any error instead of using
LightConfig::try_from_slice.
♻️ Duplicate comments (2)
forester/src/compressible/pda/compressor.rs (1)
165-169: Potential runtime panic with `buffer_unordered(0)`. If `max_concurrent` is passed as `0`, `buffer_unordered(0)` will panic at runtime. This value can originate from configuration (e.g., environment variables), so defensive clamping is warranted.
🔧 Proposed fix

```diff
     // Execute concurrently with limit
+    let concurrency = max_concurrent.max(1);
     let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+        .buffer_unordered(concurrency)
         .collect()
         .await;
```

forester/tests/test_compressible_mint.rs (1)
92-104: Comment doesn't match the code behavior. Line 92 says "CreateMint auto-decompresses (rent_payment > 0)" but line 103 sets `rent_payment: 0`. The code works correctly for testing (immediately compressible), but the comment is misleading.
📝 Proposed fix

```diff
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 makes the mint immediately compressible for testing
     let params = CreateMintParams {
```
```rust
let data = match base64::decode(data_str) {
    Ok(bytes) => bytes,
    Err(e) => {
        debug!("Failed to decode base64 for account {}: {:?}", pubkey, e);
        return None;
    }
};
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:

```bash
cat forester/Cargo.toml | grep -A 5 "base64"
```

Repository: Lightprotocol/light-protocol
Length of output: 323

🏁 Script executed:

```bash
cat -n forester/src/compressible/bootstrap_helpers.rs | sed -n '110,130p'
```

Repository: Lightprotocol/light-protocol
Length of output: 705

🏁 Script executed:

```bash
find . -name "CLAUDE.md" -type f
```

Repository: Lightprotocol/light-protocol
Length of output: 645

🏁 Script executed:

```bash
head -100 Cargo.toml | grep -A 10 "\[workspace"
```

Repository: Lightprotocol/light-protocol
Length of output: 745

🏁 Script executed:

```bash
cat Cargo.toml | grep -B 5 -A 5 "base64"
```

Repository: Lightprotocol/light-protocol
Length of output: 320

🏁 Script executed:

```bash
cat CLAUDE.md | head -150
```

Repository: Lightprotocol/light-protocol
Length of output: 7261

🏁 Script executed:

```bash
cat forester/CLAUDE.md
```

Repository: Lightprotocol/light-protocol
Length of output: 122

🏁 Script executed:

```bash
grep -r "base64::" forester/src/ | head -20
```

Repository: Lightprotocol/light-protocol
Length of output: 382

🏁 Script executed:

```bash
cat Cargo.toml | grep -A 2 'base64 = "0.13"'
```

Repository: Lightprotocol/light-protocol
Length of output: 121
Consider this refactoring when upgrading to base64 0.20+.
The base64::decode API is still the standard in the current version (0.13), but it was deprecated in base64 v0.20+. When you eventually upgrade the dependency, update to the engine-based API:
```diff
-    let data = match base64::decode(data_str) {
+    use base64::{Engine as _, engine::general_purpose::STANDARD};
+    let data = match STANDARD.decode(data_str) {
```

Apply the same pattern to other instances in subscriber.rs and tree_data_sync.rs for consistency.
🤖 Prompt for AI Agents
In `@forester/src/compressible/bootstrap_helpers.rs` around lines 117 - 123,
Replace uses of the deprecated base64::decode(...) with the engine-based API so
the code compiles under base64 v0.20+; for example, change the call that assigns
data from base64::decode(data_str) (the variable named data in
bootstrap_helpers.rs) to use
base64::engine::general_purpose::STANDARD.decode(data_str) (or import
base64::Engine and call STANDARD.decode(data_str)), preserve the existing error
handling and debug log that references pubkey, and apply the same refactor to
other occurrences in subscriber.rs and tree_data_sync.rs so all base64 decoding
uses the engine-based API.
Force-pushed: c499bd8 → 0093038
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/compressible/ctoken/state.rs (1)
93-101: Avoid leaving stale state when slot calculation fails. On failure you now return `Ok(())` without updating or removing the existing entry, so a previously tracked account can remain with an outdated `compressible_slot` and still be considered for compression. Consider removing the entry (or bubbling the error) to avoid acting on stale data.
🛠️ Suggested fix (remove stale entry on failure)

```diff
         Err(e) => {
             warn!(
                 "Failed to calculate compressible slot for {}: {}",
                 pubkey, e
             );
+            self.remove(&pubkey);
             return Ok(());
         }
```

forester/src/compressible/bootstrap_helpers.rs (1)
223-225: Consider extending localhost detection for edge cases. The current check covers the common cases (`localhost`, `127.0.0.1`), but you may want to also handle:

- `0.0.0.0` (sometimes used for local binding)
- `[::1]` or `::1` (IPv6 localhost)

This is a minor edge case; the current implementation works for typical local development setups.
forester/tests/test_compressible_mint.rs (1)
321-343: Prefer `build_expected_mint` here to keep assertions consistent. This reduces duplication and keeps the expected-state logic centralized.
♻️ Suggested refactor

```diff
-    // Build expected Mint
-    let expected_mint = Mint {
-        base: BaseMint {
-            mint_authority: Some(payer.pubkey().to_bytes().into()),
-            supply: 0,
-            decimals: 9,
-            is_initialized: true,
-            freeze_authority: None,
-        },
-        metadata: MintMetadata {
-            version: metadata_version,
-            mint_decompressed: true,
-            mint: mint_pda.to_bytes().into(),
-            mint_signer: mint_seed.pubkey().to_bytes(),
-            bump,
-        },
-        reserved: [0u8; 16],
-        account_type: ACCOUNT_TYPE_MINT,
-        compression,
-        extensions: None,
-    };
+    // Build expected Mint
+    let expected_mint = build_expected_mint(
+        &payer.pubkey(),
+        9,
+        &mint_pda,
+        &mint_seed.pubkey().to_bytes(),
+        bump,
+        metadata_version,
+        compression,
+    );
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 399-405: The spawned shutdown listener (tokio::spawn using
rx.await and shutdown_flag_clone.store) can leak if run_bootstrap returns early;
change the approach to either (1) race the bootstrap work against the shutdown
signal using tokio::select! (pass shutdown_rx into
bootstrap_standard_api/bootstrap_v2_api and check it there) so no background
task is spawned, or (2) keep the JoinHandle from tokio::spawn and explicitly
abort or await it before run_bootstrap returns (drop/abort the handle on all
return paths) to ensure the spawned task is cleaned up; update the code paths
that currently use the shutdown_rx + shutdown_flag_clone + rx.await spawn block
accordingly.
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 153-170: The code currently pushes every chunk's signature into
signatures regardless of `confirmed`; update the logic in the function handling
the batching (the loop that uses `confirmed`, `chunk`, `chunk_idx`, `signature`,
and `self.tracker.remove`) so that you only push `signature` into the
`signatures` Vec when `confirmed` is true (i.e., move the
`signatures.push(signature)` inside the `if confirmed { ... }` block and keep
`self.tracker.remove(&mint_state.pubkey)` behavior unchanged), ensuring only
confirmed signatures are returned to callers.
In `@forester/tests/test_compressible_ctoken.rs`:
- Around line 528-532: The current loop uses .expect(&format!(...)) which
allocates even on success; change the lookup chain (the iterator .find(...)
calling .expect(...)) to defer allocation by replacing the expect/format call
with unwrap_or_else that panics only on the failure path (e.g., use
.unwrap_or_else(|| panic!("Bootstrap should have found account {}", pubkey))).
Update the code around expected_pubkeys, accounts, and account_state to use this
unwrap_or_else approach so the formatted string is only created when the lookup
fails.
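The suggested lazy-panic lookup in sketch form (the `find` predicate is illustrative):

```rust
let account_state = accounts
    .iter()
    .find(|a| a.pubkey == pubkey)
    // The message is only formatted when the lookup actually fails.
    .unwrap_or_else(|| panic!("Bootstrap should have found account {}", pubkey));
```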
In `@program-libs/CLAUDE.md`:
- Around line 63-72: Add a blank line between the "## Testing" heading and the
fenced code block so the triple-backtick block is preceded by an empty line;
locate the "## Testing" section in CLAUDE.md and insert a single blank line
before the code fence (the ``` bash block containing the three cargo test lines)
to follow Markdown best practices.
- Around line 1-4: Replace the current paragraph summary in CLAUDE.md with a 2–5
bullet "Summary" section, and add "Used In" and "Navigation" headings with brief
bullet lists; keep the existing high-level description but reformat it into the
header structure ("## Summary", "## Used In", "## Navigation", plus any existing
high-level sections) so the file matches the CLAUDE.md template and provides
clear links to on-chain programs, sdk-libs, and relevant tests or crate docs.
♻️ Duplicate comments (3)
forester/src/compressible/pda/compressor.rs (1)
174-178: Guard against `max_concurrent == 0` to prevent `buffer_unordered` panic. This was flagged in a previous review and addressed in `mint/compressor.rs` (lines 191-198), but the same fix is missing here. `buffer_unordered(0)` panics at runtime.
🔧 Proposed fix

```diff
+    // Guard against max_concurrent == 0 to avoid buffer_unordered panic
+    if max_concurrent == 0 {
+        return account_states
+            .iter()
+            .cloned()
+            .map(|state| Err((state, anyhow::anyhow!("max_concurrent must be > 0"))))
+            .collect();
+    }
+
     // Execute concurrently with limit
     let results: Vec<_> = futures::stream::iter(compression_futures)
         .buffer_unordered(max_concurrent)
```
sdk-libs/client/src/local_test_validator.rs (1)
71-77: Consider shell-escaping upgradeable program args. If any path contains spaces/special characters, `sh -c` will misparse the command line. This is low-risk for test infra but still worth hardening if paths can vary.

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Replace `todo!()` with deterministic errors to avoid test panics. These stub methods will panic if called; return an `IndexerError` instead so callers can handle unimplemented paths.
✅ Safer stub implementation

```diff
 async fn get_compressed_mint(
     &self,
     _address: Address,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint not implemented")
+    Err(IndexerError::CustomError(
+        "get_compressed_mint not implemented".to_string(),
+    ))
 }

 async fn get_compressed_mint_by_pda(
     &self,
     _mint_pda: &Pubkey,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint_by_pda not implemented")
+    Err(IndexerError::CustomError(
+        "get_compressed_mint_by_pda not implemented".to_string(),
+    ))
 }

 async fn get_compressed_mints_by_authority(
     &self,
     _authority: &Pubkey,
     _authority_type: MintAuthorityType,
     _options: Option<GetCompressedMintsByAuthorityOptions>,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mints_by_authority not implemented")
+    Err(IndexerError::CustomError(
+        "get_compressed_mints_by_authority not implemented".to_string(),
+    ))
 }
```
Force-pushed: 7671bfc → e2663c4
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
program-libs/CLAUDE.md (1)
68-70: Include `--all-features` for light-batched-merkle-tree tests. The testing guidance here should match the standard unit test command set for program-libs; batched-merkle-tree also needs `--all-features`. As per coding guidelines, keep CLAUDE.md accurate. Based on learnings, update the command.
✅ Proposed doc fix

```diff
-cargo test -p light-batched-merkle-tree
+cargo test -p light-batched-merkle-tree --all-features
```

forester/src/compressible/pda/compressor.rs (1)
227-243: Consider adding error context for individual account failures in parallel fetch. When `try_join_all` fails, you lose visibility into which specific compressed account caused the failure. For debugging larger batches, capturing which address failed would be helpful.
♻️ Optional improvement

```diff
 let compressed_account_futures = compressed_addresses.iter().map(|addr| {
     let rpc_clone = self.rpc_pool.clone();
     let addr = *addr;
     async move {
         let rpc = rpc_clone.get_connection().await?;
         rpc.get_compressed_account(addr, None)
             .await
-            .map_err(|e| anyhow::anyhow!("Failed to get compressed account: {:?}", e))?
+            .map_err(|e| anyhow::anyhow!("Failed to get compressed account {:?}: {:?}", addr, e))?
             .value
-            .ok_or_else(|| anyhow::anyhow!("Compressed account not found: {:?}", addr))
+            .ok_or_else(|| anyhow::anyhow!("Compressed account not found for address: {:?}", addr))
     }
 });
```

forester/src/compressible/ctoken/state.rs (1)
93-101: Silent failure on compressible slot calculation may mask upstream issues. Changing from error propagation to `warn!` + `Ok(())` means callers won't know when accounts fail to be tracked due to missing extensions or invalid rent data. This is fine for graceful degradation, but consider whether you want metrics or alerting on these "soft failures" in production, especially if many accounts start hitting this path unexpectedly. If this is intentional (e.g., you expect some accounts to lack the Compressible extension), a brief inline comment documenting that expectation would help future readers.
🤖 Fix all issues with AI agents
In `@forester/package.json`:
- Line 6: The package.json "build" npm script currently runs "cargo build" which
generates artifacts under target/debug, but NX expects outputs at
{workspaceRoot}/target/release; update the "build" script to run cargo build
--release (i.e., change the "build" script entry in package.json) so outputs
land in target/release, or alternatively change the NX configuration outputs to
point to {workspaceRoot}/target/debug to match the existing "build"
script—modify either the package.json "build" script or the NX outputs setting
to make them consistent.
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 415-473: Both branches of the RPC bootstrap (the is_localhost
branch calling bootstrap_standard_api and the else branch calling
bootstrap_v2_api) repeat the same post-call steps: aborting
shutdown_listener_handle, unwrapping the api_result, logging with label, and
constructing a BootstrapResult; extract that shared logic into a small helper or
closure (e.g., a function handle_bootstrap_result) that takes the api_result (or
its Ok tuple) and label and performs handle.abort(), info! logging, and
BootstrapResult construction, then call that helper from both places to remove
duplication while keeping bootstrap_standard_api and bootstrap_v2_api calls
intact.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 759-767: The test currently finds a mint by comparing
m.mint.decimals == 9 which can collide with other mints; update the lookup in
the mints_by_authority query to use a unique identifier (e.g., compare m.address
or m.mint.compression_address against the known compression_address_1) so the
assert targets the specific mint under test (adjust the variable referenced in
the closure used by found_mint and keep the subsequent assert the same).
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1058-1061: The doc comment for the struct field `extensions` is
incorrect: it states "Serialized extensions (base64 encoded)" but the field
`extensions: Option<Vec<u8>>` actually contains decoded bytes (base64 decoding
occurs in the `TryFrom` implementation). Update the comment on `extensions` in
types.rs to reflect that it stores decoded/serialized bytes (e.g., "Serialized
extensions (decoded bytes)") and, if helpful, add a brief note pointing to the
`TryFrom` implementation that performs base64 decoding.
In `@sdk-libs/photon-api/src/models/_get_compressed_mint_post_request_params.rs`:
- Around line 22-28: The current GetCompressedMintPostRequestParams::new()
constructs an instance with both address and mint_pda as None which allows
invalid requests; replace it with a checked constructor that enforces exactly
one selector is set (either address or mint_pda). Implement either two explicit
constructors (e.g., GetCompressedMintPostRequestParams::with_address(address:
String) and ::with_mint_pda(mint_pda: String)) or a single checked factory
(e.g., ::try_new(address: Option<String>, mint_pda: Option<String>) ->
Result<Self, Error>) that returns an error if both or neither are provided;
update callers to use the new constructors and remove/privatize the unsafe
public new() to prevent creating malformed params.
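A sketch of the checked factory; the error type is illustrative:

```rust
impl GetCompressedMintPostRequestParams {
    /// Exactly one of `address` / `mint_pda` must be provided.
    pub fn try_new(
        address: Option<String>,
        mint_pda: Option<String>,
    ) -> Result<Self, String> {
        match (address.is_some(), mint_pda.is_some()) {
            (true, false) | (false, true) => Ok(Self { address, mint_pda }),
            _ => Err("exactly one of `address` or `mint_pda` must be set".to_string()),
        }
    }
}
```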
♻️ Duplicate comments (2)
forester/src/compressible/pda/compressor.rs (1)
176-180: `buffer_unordered(0)` will panic at runtime if `max_concurrent` is zero. This issue was flagged in a previous review and remains unaddressed. If `max_concurrent` (derived from the `max_concurrent_batches` config) is set to 0 via environment variable, this will cause a runtime panic. Guard with `.max(1)` before passing to `buffer_unordered()`.
🔧 Proposed fix

```diff
     // Execute concurrently with limit
+    let concurrency = max_concurrent.max(1);
     let results: Vec<_> = futures::stream::iter(compression_futures)
-        .buffer_unordered(max_concurrent)
+        .buffer_unordered(concurrency)
         .collect()
         .await;
```

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Avoid `todo!()` panics for the new compressed-mint methods. These should return a deterministic `Err(IndexerError::...)` so tests can handle the unimplemented path gracefully instead of crashing.
Resolved: sdk-libs/photon-api/src/models/_get_compressed_mint_post_request_params.rs
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/compressible/ctoken/state.rs (1)
93-101: Avoid stale tracker entries when compressible-slot calc fails. Returning `Ok(())` without updating/removing leaves any previously tracked state in place, which can schedule compression at the wrong slot. Consider removing the existing entry (or propagating the error) on failure.
🔧 Suggested fix

```diff
         Err(e) => {
             warn!(
                 "Failed to calculate compressible slot for {}: {}",
                 pubkey, e
             );
+            // Avoid stale entries if this account was previously tracked.
+            self.remove(&pubkey);
             return Ok(());
         }
```

forester/tests/test_compressible_ctoken.rs (2)
100-156: Add a timeout around registration/active phase waits to avoid long hangs. When `current_slot` is already in the active phase, the new logic waits for the next epoch's registration start with no upper bound. That can stall CI for a long time if epochs are long or slot progress is slow. A bounded wait (with a clear error) keeps the test deterministic. Consider guarding the two `while rpc.get_slot() < ...` loops with a timeout and failing fast if exceeded.
🔧 Suggested timeout guard

```diff
+    let wait_deadline = tokio::time::Instant::now() + Duration::from_secs(60);
     // Wait for registration phase
     while rpc.get_slot().await? < register_phase_start {
+        if tokio::time::Instant::now() > wait_deadline {
+            return Err("Timed out waiting for registration phase".into());
+        }
         sleep(Duration::from_millis(400)).await;
     }

+    let wait_deadline = tokio::time::Instant::now() + Duration::from_secs(60);
     // Wait for active phase
     while rpc.get_slot().await? < active_phase_start {
+        if tokio::time::Instant::now() > wait_deadline {
+            return Err("Timed out waiting for active phase".into());
+        }
         sleep(Duration::from_millis(400)).await;
     }
```
53-67: Mitigate airdrop rate-limit flakiness with retry/backoff. CI already failed with "airdrop lamports request failed (rate limit)". Add a small retry helper with backoff to make these tests robust.
🔧 Example retry helper
```diff
+async fn airdrop_with_retry<R: Rpc>(
+    rpc: &mut R,
+    pubkey: &Pubkey,
+    lamports: u64,
+) -> Result<(), Box<dyn std::error::Error>> {
+    let mut delay = Duration::from_millis(200);
+    for attempt in 1..=5 {
+        match rpc.airdrop_lamports(pubkey, lamports).await {
+            Ok(_) => return Ok(()),
+            Err(e) if attempt < 5 => {
+                tracing::warn!("Airdrop failed (attempt {}): {:?}. Retrying...", attempt, e);
+                sleep(delay).await;
+                delay *= 2;
+            }
+            Err(e) => return Err(e.into()),
+        }
+    }
+    Ok(())
+}
```

```diff
-    rpc.airdrop_lamports(&governance_pubkey, 1_000_000_000)
-        .await?;
+    airdrop_with_retry(rpc, &governance_pubkey, 1_000_000_000).await?;

-    rpc.airdrop_lamports(&forester_pubkey, 10_000_000_000)
-        .await?;
+    airdrop_with_retry(rpc, &forester_pubkey, 10_000_000_000).await?;
```

```diff
-    rpc.airdrop_lamports(&payer.pubkey(), 10_000_000_000)
-        .await
-        .expect("Failed to airdrop lamports");
+    airdrop_with_retry(&mut rpc, &payer.pubkey(), 10_000_000_000)
+        .await
+        .expect("Failed to airdrop lamports");
```

Also applies to: 231-233
🤖 Fix all issues with AI agents
In `@forester/tests/test_compressible_mint.rs`:
- Around line 398-419: The test currently treats a mint as closed only if
rpc.get_account(mint_pda) returns None; update the loop that queries get_account
(variable mint_after) to also treat Some(account) with account.lamports == 0 as
closed (set account_closed = true and break) so zero‑lamport leftover accounts
are accepted as closed (same pattern used in the ctoken test).
In `@sdk-libs/photon-api/src/models/compressed_mint_list.rs`:
- Around line 11-19: The CompressedMintList struct derives Serialize and
Deserialize but the serde traits are not imported; add an explicit import for
serde::Serialize and serde::Deserialize at the top of the file so the derives on
the CompressedMintList struct (and its fields like items:
Vec<models::CompressedMint>, cursor: Option<String>) compile correctly.
♻️ Duplicate comments (2)
sdk-libs/client/src/local_test_validator.rs (1)
71-76: Consider shell-escaping upgradeable program args. This is still vulnerable to spaces/special chars in paths. Quoting or `shell_escape` would make the command safer.
🔧 Example quoting

```diff
-    path.push_str(&format!(
-        " --upgradeable-program {} {} {}",
-        upgradeable_program.program_id,
-        upgradeable_program.program_path,
-        upgradeable_program.upgrade_authority
-    ));
+    path.push_str(&format!(
+        " --upgradeable-program {} \"{}\" {}",
+        upgradeable_program.program_id,
+        upgradeable_program.program_path,
+        upgradeable_program.upgrade_authority
+    ));
```

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1022: Avoid `todo!()` panics in the new compressed mint stubs. These will still panic if invoked; return a deterministic `IndexerError` instead so tests can handle unimplemented behavior gracefully.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
forester/tests/test_compressible_mint.rs (2)
322-341: Consider usingbuild_expected_minthelper for consistency.You've defined the
build_expected_minthelper at lines 26-58 and use it intest_compressible_mint_bootstrap, but here you're constructing the expected Mint inline. Using the helper keeps things DRY and ensures any future changes to Mint structure only need updating in one place.♻️ Suggested refactor
```diff
-    // Build expected Mint
-    let expected_mint = Mint {
-        base: BaseMint {
-            mint_authority: Some(payer.pubkey().to_bytes().into()),
-            supply: 0,
-            decimals: 9,
-            is_initialized: true,
-            freeze_authority: None,
-        },
-        metadata: MintMetadata {
-            version: metadata_version,
-            mint_decompressed: true,
-            mint: mint_pda.to_bytes().into(),
-            mint_signer: mint_seed.pubkey().to_bytes(),
-            bump,
-        },
-        reserved: [0u8; 16],
-        account_type: ACCOUNT_TYPE_MINT,
-        compression,
-        extensions: None,
-    };
+    // Build expected Mint using helper
+    let expected_mint = build_expected_mint(
+        &payer.pubkey(),
+        9,
+        &mint_pda,
+        &mint_seed.pubkey().to_bytes(),
+        bump,
+        metadata_version,
+        compression,
+    );
```
641-652: Same zero-lamport handling needed here. Consistent with the other test, this loop should also treat zero-lamport accounts as closed.
🔧 Suggested fix
```diff
 while start.elapsed() < timeout {
     let mint_after = rpc
         .get_account(mint_pda_1)
         .await
         .expect("Failed to query mint account");

-    if mint_after.is_none() {
+    if mint_after.is_none()
+        || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+    {
         account_closed = true;
         println!("First mint account closed successfully!");
         break;
     }

     sleep(Duration::from_millis(500)).await;
 }
```
♻️ Duplicate comments (1)
forester/tests/test_compressible_mint.rs (1)
403-414: Handle zero-lamport accounts as “closed”. A past review noted that compression might leave a zero-lamport account rather than deleting it outright. The current check only treats `None` as closed. Treating `lamports == 0` as closed too makes the test more robust.

🔧 Suggested fix

```diff
 while start.elapsed() < timeout {
     let mint_after = rpc
         .get_account(mint_pda)
         .await
         .expect("Failed to query mint account");

-    if mint_after.is_none() {
+    if mint_after.is_none()
+        || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+    {
         account_closed = true;
         println!("Mint account closed successfully!");
         break;
     }

     sleep(Duration::from_millis(500)).await;
 }
```
Force-pushed from 4844a79 to 5268a7f
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/tests/test_compressible_ctoken.rs (1)
94-160: Add a bounded wait for phase transitions to prevent hangs. With target_epoch potentially rolling forward, the two “wait for phase” loops can stall for an unbounded duration if phase timing drifts or the local validator stalls. Please add a maximum wait and fail fast with a clear error so CI doesn’t hang indefinitely.
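A std-only sketch of such a bounded wait; `phase_reached` and the 120-second budget are placeholders for the test's actual check and timing:

```rust
use std::time::{Duration, Instant};

// Fails fast with a clear error instead of hanging CI when the validator stalls.
fn wait_for_phase(mut phase_reached: impl FnMut() -> bool) -> Result<(), String> {
    let deadline = Instant::now() + Duration::from_secs(120);
    while Instant::now() < deadline {
        if phase_reached() {
            return Ok(());
        }
        std::thread::sleep(Duration::from_millis(500));
    }
    Err("timed out waiting for phase transition".to_string())
}
```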
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/bootstrap.rs`:
- Around line 46-61: Update the final info log to include the number of
successfully inserted accounts so failures are visible: when logging the
bootstrap summary after run_bootstrap returns, include result.inserted alongside
tracker.len() and result.fetched (the closure that processes accounts is
process_account, and the bootstrap call is run_bootstrap), so the message
reports total tracked, fetched, inserted, and pages to aid debugging in cases
where process_account may return false.
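A sketch of what the expanded summary log might look like, using the `tracing` crate already present in forester; the `BootstrapResult` field names mirror the prompt and may not match the real struct exactly:

```rust
// Hypothetical shape of the bootstrap summary; field names are assumptions.
struct BootstrapResult {
    pages: usize,
    fetched: usize,
    inserted: usize,
}

fn log_bootstrap_summary(tracked: usize, result: &BootstrapResult) {
    // Reporting `inserted` next to `fetched` makes a gap caused by
    // process_account returning false immediately visible.
    tracing::info!(
        tracked,
        fetched = result.fetched,
        inserted = result.inserted,
        pages = result.pages,
        "mint bootstrap complete"
    );
}
```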
♻️ Duplicate comments (4)
sdk-libs/photon-api/src/models/compressed_mint_list.rs (1)
11-19: Add missing serde derives import (compile error otherwise). If there isn’t a crate-level `#[macro_use] extern crate serde;` or module prelude, these derives won’t resolve.

🔧 Proposed fix

```diff
 use crate::models;
+use serde::{Deserialize, Serialize};
```

Run this to confirm whether serde derive macros are already in scope via crate-wide imports:

```bash
#!/bin/bash
# Check for macro_use or re-exports that would make Serialize/Deserialize available
rg -n "macro_use.*serde|extern crate serde|pub use serde::\{.*Serialize.*Deserialize.*\}" sdk-libs/photon-api
```

sdk-libs/client/src/indexer/types.rs (1)
1058-1061: Fix `extensions` doc comment to match decoded bytes. The field holds decoded bytes (base64 is decoded in the `TryFrom` implementation), so the current wording is misleading.

📝 Suggested doc comment tweak

```diff
-    /// Serialized extensions (base64 encoded)
+    /// Serialized extensions (decoded bytes from the API's base64 representation)
     pub extensions: Option<Vec<u8>>,
```

forester/tests/test_compressible_mint.rs (1)
403-413: Treat zero-lamport mint accounts as closed. Some compression paths can leave a zero-lamport account instead of `None`; the current polling can falsely fail. Consider applying the same check in the subscription test loop around line 646 as well.

🔧 Suggested update

```diff
-        if mint_after.is_none() {
+        if mint_after.is_none()
+            || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+        {
             account_closed = true;
             println!("Mint account closed successfully!");
             break;
         }
```

```diff
-        if mint_after.is_none() {
+        if mint_after.is_none()
+            || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+        {
             account_closed = true;
             println!("First mint account closed successfully!");
             break;
         }
```

Also applies to: 641-649
sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1022: Avoid `todo!()` panics in compressed-mint stubs. Same concern as the prior review: return a deterministic `Err(IndexerError)` so tests can handle unimplemented calls without panicking.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@forester/tests/test_indexer_interface.rs`:
- Line 183: Calls to wait_for_indexer are using needless mutable borrows
(wait_for_indexer(&mut rpc)); change those call sites to pass an immutable
reference (wait_for_indexer(&rpc)) since wait_for_indexer only requires
&RpcClient (or equivalent) and does not mutate rpc. Update all occurrences
(e.g., the call at wait_for_indexer(&mut rpc) and the similar calls around lines
noted) so they use &rpc, and remove any unnecessary mut on local variable
declarations only if no other uses require mutation.
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1244-1300: The TokenAccountInterface TryFrom implementation
duplicates AccountInterface construction; replace the manual reconstruction in
impl TryFrom<&photon_api::models::TokenAccountInterface> for
TokenAccountInterface (the block building `account: AccountInterface { ... }`)
by calling AccountInterface::try_from(&tai.account) and propagate errors, then
use the returned AccountInterface when building TokenAccountInterface; ensure
you keep the existing token parsing logic (TokenData construction) unchanged and
adjust imports/traits if needed to resolve TryFrom conversions.
- Around line 1311-1341: The MintInterface TryFrom implementation duplicates
AccountInterface construction; replace the manual reconstruction block that
builds AccountInterface (address, lamports, owner, data, executable, rent_epoch,
resolved_from, resolved_slot, compressed_context) with a call to
AccountInterface::try_from(&mi.account)? to reuse the existing conversion logic,
then keep the existing MintData::try_from(&mi.mint_data)? and return
Ok(MintInterface { account, mint_data }); ensure any now-unused local variables
are removed and necessary trait/imports for TryFrom are present.
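Both prompts above reduce to the same delegation pattern. A sketch, assuming the generated photon-api models box their nested fields (exact field names may differ); this is not the definitive implementation:

```rust
impl TryFrom<&photon_api::models::MintInterface> for MintInterface {
    type Error = IndexerError;

    fn try_from(mi: &photon_api::models::MintInterface) -> Result<Self, Self::Error> {
        // Reuse the existing AccountInterface conversion instead of
        // rebuilding its fields inline; errors propagate unchanged.
        let account = AccountInterface::try_from(mi.account.as_ref())?;
        let mint_data = MintData::try_from(mi.mint_data.as_ref())?;
        Ok(MintInterface { account, mint_data })
    }
}
```

The same shape applies to `TokenAccountInterface`, with the existing `TokenData` parsing kept as-is.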
In `@sdk-libs/photon-api/src/models/_get_token_account_interface_post_request.rs`:
- Around line 27-33: The constructor GetTokenAccountInterfacePostRequest::new
currently sets a hardcoded id ("test-id") which breaks JSON-RPC response
correlation; change it to generate a unique UUID v4 string for the id (use
uuid::Uuid::new_v4().to_string()) when building the struct (inside
GetTokenAccountInterfacePostRequest::new where id is set alongside jsonrpc,
method, params) and update any other similar constructors (e.g.,
GetAccountInterfacePostRequest::new, GetMintInterfacePostRequest::new) to use
the same UUID v4 pattern so each request gets a unique id.
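A sketch of the UUID-based constructor, assuming the `uuid` crate with its `v4` feature; the struct shape mirrors the generated request models quoted elsewhere in this review:

```rust
impl GetTokenAccountInterfacePostRequest {
    pub fn new(params: models::GetTokenAccountInterfacePostRequestParams) -> Self {
        Self {
            // Unique per request, so JSON-RPC responses correlate correctly.
            id: uuid::Uuid::new_v4().to_string(),
            jsonrpc: "2.0".to_string(),
            method: "getTokenAccountInterface".to_string(),
            params: Box::new(params),
        }
    }
}
```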
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@sdk-libs/photon-api/src/models/cold_data.rs`:
- Around line 13-25: The discriminator field currently allows any length
(Vec<u8>) which contradicts the “first 8 bytes” doc; change
ColdData::discriminator to a fixed-size [u8; 8] (update the struct type and the
ColdData::new signature to accept [u8; 8]) so invalid-length payloads are
impossible, and update call sites to construct or convert into [u8; 8] (or,
alternatively, if you must accept variable input, add explicit validation inside
ColdData::new that checks discriminator.len() == 8 and return a Result or panic
on mismatch); refer to the ColdData struct and impl ColdData::new to make these
changes.
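If changing the field type is too invasive, a std-only validate-then-convert helper achieves the same guarantee; `String` stands in for whatever error type the crate uses:

```rust
// Rejects any payload whose discriminator is not exactly 8 bytes.
fn discriminator_from_slice(bytes: &[u8]) -> Result<[u8; 8], String> {
    <[u8; 8]>::try_from(bytes)
        .map_err(|_| format!("discriminator must be 8 bytes, got {}", bytes.len()))
}

fn main() {
    assert!(discriminator_from_slice(&[1, 2, 3]).is_err());
    assert_eq!(discriminator_from_slice(&[0u8; 8]).unwrap(), [0u8; 8]);
}
```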
- Implemented `get_compressed_mint` and `get_compressed_mint_by_pda` methods in the PhotonIndexer. - Added `get_compressed_mints_by_authority` method to retrieve compressed mints based on authority. - Introduced new data structures: `CompressedMint`, `MintData`, and `CompressedMintList` to handle compressed mint data. - Updated Photon API with new request and response models for compressed mint operations. - Enhanced error handling for API responses related to compressed mints. - Updated tests to cover new functionality for compressed mint retrieval.
…files; refactor mint compression logic to handle batching and improve error handling; enhance test cases for mint creation and compression
…ons to remove Option wrapper and set default value
…ction to use into() for conversion
…int compression test with improved assertions and logging
Force-pushed from 2c976fc to cf0f163
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/client/src/interface/account_interface_ext.rs (1)
238-239: TODO noted: concurrent fetching. The sequential loop with `TODO: concurrent with futures` is fine for now, but for production performance with many accounts, you'll want to use `futures::future::join_all` or similar to parallelize the RPC calls. Happy to help sketch that out when you're ready.
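One possible shape for that sketch, using `futures::future::join_all`; `fetch_one` is a placeholder for the per-account RPC call, and a production version would likely cap concurrency (e.g., `futures::stream::StreamExt::buffer_unordered`):

```rust
use futures::future::join_all;

// Placeholder for the real per-account RPC call.
async fn fetch_one(key: [u8; 32]) -> Result<Vec<u8>, String> {
    let _ = key;
    Ok(Vec::new())
}

// Issue all fetches concurrently instead of awaiting them one-by-one.
async fn fetch_all(keys: Vec<[u8; 32]>) -> Vec<Result<Vec<u8>, String>> {
    join_all(keys.into_iter().map(fetch_one)).await
}
```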
forester/tests/test_compressible_mint.rs (1)

313-315: Minor: Redundant clone before unwrap. The `clone()` on line 314 is unnecessary since you immediately `unwrap()` it. You can simplify this.

♻️ Suggested simplification

```diff
-    let mint_data = mint_account.clone().unwrap();
+    let mint_data = mint_account.unwrap();
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/state.rs`:
- Around line 22-46: The early-return duplicates available_balance/epochs_funded
logic and can diverge from get_last_funded_epoch (leading to inconsistent
results when epochs_funded == 0 and compression_info.last_claimed_slot > 0);
remove the local early-return that checks rent_per_epoch == 0 ||
available_balance / rent_per_epoch == 0 and instead always call
get_last_funded_epoch (using compression_info, account_size, lamports, config,
and rent_exemption) and return (last_funded_epoch + 1) * SLOTS_PER_EPOCH, or if
you prefer to keep a short-circuit, make the zero-epochs behavior match
get_last_funded_epoch by returning last_claimed_epoch.saturating_sub(1)
converted to slot before multiplying by SLOTS_PER_EPOCH (referencing
get_last_funded_epoch, available_balance, rent_per_epoch,
compression_info.last_claimed_slot, and SLOTS_PER_EPOCH).
In `@forester/tests/test_indexer_interface.rs`:
- Around line 41-44: The test imports from outdated paths in light_token_client;
update the imports so the symbols resolve (or add crate re-exports/feature
flags). Locate the test's use of create_compressible_token_account and
CreateCompressibleTokenAccountInputs and change their import path to the crate's
current module (or add pub use in light_token_client) and likewise ensure
create_mint_action_instruction, MintActionParams, and MintActionType are
imported from the correct instructions::mint_action module (or re-export them
from a top-level module); adjust the test imports to match the crate's API (or
add the necessary pub use re-exports / feature-gated exports in
light_token_client) so the compiler can find these symbols.
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1167-1226: The convert_cold_context function duplicates the same
tree_info and data decoding across the Account/Token/Mint arms; extract that
logic into small helpers (e.g., decode_tree_info(tree_info:
&photon_api::models::TreeInfo) -> Result<InterfaceTreeInfo, IndexerError> and
decode_cold_data(data: &photon_api::models::ColdData) -> Result<ColdData,
IndexerError>) that reuse decode_base58_to_fixed_array, Pubkey::new_from_array,
base64::decode_config and map base64 errors to
IndexerError::InvalidResponseData, then call these helpers from each arm in
convert_cold_context to remove duplication while preserving existing return
types (ColdContext variants).
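A sketch of one such helper; the field names on `photon_api::models::TreeInfo` and `InterfaceTreeInfo` are assumed from the prompt, not taken from the crate:

```rust
// Shared by the Account, Token, and Mint arms of convert_cold_context.
fn decode_tree_info(
    ti: &photon_api::models::TreeInfo,
) -> Result<InterfaceTreeInfo, IndexerError> {
    let tree = decode_base58_to_fixed_array(&ti.tree)?;
    let queue = decode_base58_to_fixed_array(&ti.queue)?;
    Ok(InterfaceTreeInfo {
        tree: Pubkey::new_from_array(tree),
        queue: Pubkey::new_from_array(queue),
    })
}
```

A matching `decode_cold_data` helper would wrap the base64 decode and map failures to `IndexerError::InvalidResponseData` in one place.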
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request_params.rs`:
- Around line 10-21: GetMultipleAccountInterfacesPostRequestParams currently
accepts any-length addresses in its new constructor despite the doc comment "max
100"; add a checked constructor or validation method that enforces
addresses.len() <= 100 and returns a Result (e.g., try_new or validate)
indicating an error when the limit is exceeded; update or add unit tests to
exercise the new behavior and ensure callers migrate to the fallible constructor
(keep the existing new as convenience if desired but document it doesn’t
validate).
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request.rs`:
- Around line 27-33: The constructor
GetMultipleAccountInterfacesPostRequest::new currently hardcodes id ("test-id")
and jsonrpc ("2.0"); change new to accept a caller-supplied id (and optionally
jsonrpc) or add a builder-style API so callers provide id to avoid
collisions—update the GetMultipleAccountInterfacesPostRequest::new signature to
take an id: String (and/or jsonrpc: String) and set self.id = id, leaving method
and params assignment as-is, or implement a
GetMultipleAccountInterfacesPostRequest::builder() that lets callers set
id/jsonrpc before finalizing the struct.
In `@sdk-libs/photon-api/src/models/interface_result.rs`:
- Around line 12-25: The enum InterfaceResult derives Serialize and Deserialize
but the serde traits are not imported; add the serde trait imports so the
derives resolve by importing Serialize and Deserialize (e.g., add a use
serde::{Serialize, Deserialize}; near the top of the file) and keep the existing
#[derive(...)] on InterfaceResult unchanged so the Account/Token/Mint variants
compile correctly.
```rust
/// GetMultipleAccountInterfacesPostRequestParams : Request parameters for getMultipleAccountInterfaces
#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
pub struct GetMultipleAccountInterfacesPostRequestParams {
    /// List of account addresses to look up (max 100)
    #[serde(rename = "addresses")]
    pub addresses: Vec<String>,
}

impl GetMultipleAccountInterfacesPostRequestParams {
    pub fn new(addresses: Vec<String>) -> Self {
        Self { addresses }
    }
```
Enforce the 100-address cap to prevent avoidable API errors.
The doc says “max 100,” but the constructor accepts any size. Consider adding a checked constructor (or a validate method) so callers can fail fast.
✅ Suggested addition (non-breaking helper)
```diff
 impl GetMultipleAccountInterfacesPostRequestParams {
     pub fn new(addresses: Vec<String>) -> Self {
         Self { addresses }
     }
+
+    pub fn new_checked(addresses: Vec<String>) -> Result<Self, String> {
+        if addresses.len() > 100 {
+            return Err(format!(
+                "addresses.len() must be <= 100, got {}",
+                addresses.len()
+            ));
+        }
+        Ok(Self { addresses })
+    }
 }
```

🤖 Prompt for AI Agents
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request_params.rs`
around lines 10 - 21, GetMultipleAccountInterfacesPostRequestParams currently
accepts any-length addresses in its new constructor despite the doc comment "max
100"; add a checked constructor or validation method that enforces
addresses.len() <= 100 and returns a Result (e.g., try_new or validate)
indicating an error when the limit is exceeded; update or add unit tests to
exercise the new behavior and ensure callers migrate to the fallible constructor
(keep the existing new as convenience if desired but document it doesn’t
validate).
```rust
impl GetMultipleAccountInterfacesPostRequest {
    pub fn new(params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
        Self {
            id: "test-id".to_string(),
            jsonrpc: "2.0".to_string(),
            method: "getMultipleAccountInterfaces".to_string(),
            params: Box::new(params),
```
🧹 Nitpick | 🔵 Trivial
Prefer caller-supplied JSON-RPC id to avoid collisions.
Hardcoding "test-id" makes it easy to accidentally reuse IDs across requests. Consider accepting id (and optionally jsonrpc) in new() or adding a builder-style constructor.
♻️ Suggested refactor
```diff
-    pub fn new(params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
+    pub fn new(id: String, params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
         Self {
-            id: "test-id".to_string(),
+            id,
             jsonrpc: "2.0".to_string(),
             method: "getMultipleAccountInterfaces".to_string(),
             params: Box::new(params),
         }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 impl GetMultipleAccountInterfacesPostRequest {
-    pub fn new(params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
+    pub fn new(id: String, params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
         Self {
-            id: "test-id".to_string(),
+            id,
             jsonrpc: "2.0".to_string(),
             method: "getMultipleAccountInterfaces".to_string(),
             params: Box::new(params),
```
🤖 Prompt for AI Agents
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request.rs`
around lines 27 - 33, The constructor
GetMultipleAccountInterfacesPostRequest::new currently hardcodes id ("test-id")
and jsonrpc ("2.0"); change new to accept a caller-supplied id (and optionally
jsonrpc) or add a builder-style API so callers provide id to avoid
collisions—update the GetMultipleAccountInterfacesPostRequest::new signature to
take an id: String (and/or jsonrpc: String) and set self.id = id, leaving method
and params assignment as-is, or implement a
GetMultipleAccountInterfacesPostRequest::builder() that lets callers set
id/jsonrpc before finalizing the struct.
Question: Can we remove …? Context: The PR adds interface methods to …
Logic Review: Interface Methods
C1 (Option Return Types): Interface methods should return … Current inconsistency: …
Recommended: …
C2 (Owner Semantics Mismatch): Cold account construction uses different owner semantics: …
Recommendation: Document owner field semantics per function.
C3 (Silent First-Selection): Both call sites use `result.value.items.into_iter().next()`. This silently takes the first result without validation. If the indexer returns multiple matches, only the first is used with no warning. Recommendation: Add validation or log a warning if multiple results are returned.
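A std-only sketch of that validation; `items` stands in for `result.value.items`:

```rust
// Fails on both "no match" and "ambiguous match" instead of silently
// taking the first element.
fn expect_single<T>(mut items: Vec<T>) -> Result<T, String> {
    match items.len() {
        0 => Err("no matching account returned".to_string()),
        1 => Ok(items.remove(0)),
        n => Err(format!("ambiguous response: {n} matches returned")),
    }
}
```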
Error Context Loss in Decode Operations
All base58/base64 decoding failures (13+ points) collapse to generic `IndexerError::InvalidResponseData`. Locations: … Impact: Cannot determine which field failed or why. Fix: Use existing contextual error variants:

```rust
// Instead of:
.map_err(|_| IndexerError::InvalidResponseData)?

// Use:
.map_err(|_| IndexerError::base58_decode_error("mint_pda", "invalid base58"))?
```

Note: …
Unused Error Variants
```rust
// error.rs:23-30
#[error("Base58 decode error: {field} - {message}")]
Base58DecodeError { field: String, message: String },

#[error("Data decode error: {field} - {message}")]
DataDecodeError { field: String, message: String },

// error.rs:76-88 - helper methods
pub fn base58_decode_error(field: &str, message: &str) -> Self { ... }
pub fn decode_error(field: &str, message: &str) -> Self { ... }
```

All conversions use generic `IndexerError::InvalidResponseData` instead.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/client/src/indexer/base58.rs (1)
30-34: Pre-existing: `from_bytes` will panic on incorrect slice length. This isn't part of your PR changes, but worth flagging: `copy_from_slice` panics if `bytes.len() != 32`. This creates an inconsistent API where `from_base58` returns a `Result` but `from_bytes` can panic.

🛡️ Optional defensive fix

```diff
 fn from_bytes(bytes: &[u8]) -> Result<Self, IndexerError> {
+    if bytes.len() != 32 {
+        return Err(IndexerError::base58_decode_error(
+            "bytes",
+            format!("expected 32 bytes, got {}", bytes.len()),
+        ));
+    }
     let mut arr = [0u8; 32];
     arr.copy_from_slice(bytes);
     Ok(arr)
 }
```

forester/src/compressible/mint/state.rs (1)
56-86: Avoid stale tracker entries when validation fails. If a tracked pubkey later becomes invalid (closed, reallocated, or not decompressed), the early returns leave stale state in `accounts`. Consider removing the entry on validation failures to keep tracker state consistent.

🔧 Suggested patch (apply pattern to all early-return branches)

```diff
-    if account_data.len() <= ACCOUNT_TYPE_OFFSET {
-        debug!("Mint account {} too short, skipping", pubkey);
-        return Ok(());
-    }
+    if account_data.len() <= ACCOUNT_TYPE_OFFSET {
+        debug!("Mint account {} too short, skipping", pubkey);
+        self.remove(&pubkey);
+        return Ok(());
+    }

-    if account_data[ACCOUNT_TYPE_OFFSET] != ACCOUNT_TYPE_MINT {
-        debug!("Account {} is not a Mint, skipping", pubkey);
-        return Ok(());
-    }
+    if account_data[ACCOUNT_TYPE_OFFSET] != ACCOUNT_TYPE_MINT {
+        debug!("Account {} is not a Mint, skipping", pubkey);
+        self.remove(&pubkey);
+        return Ok(());
+    }
```
🤖 Fix all issues with AI agents
In `@forester/tests/test_indexer_interface.rs`:
- Around line 125-131: Replace fixed sleeps (calls to
sleep(Duration::from_secs(...))) with deterministic readiness checks by using
the existing wait_for_indexer helper consistently: remove the sleep calls and
before each step that needs the indexer, call
wait_for_indexer(&rpc).await.expect(...). If wait_for_indexer lacks a
configurable timeout or retry interval, extend it (e.g., add a timeout or
max_retries parameter) and use that enhanced wait_for_indexer in the tests so
they no longer rely on hard sleeps and become deterministic and faster.
- Around line 554-559: The test assertion is expecting a non-existent
ColdContext::Mint variant; update the assertion to check for
ColdContext::Account instead so it matches the actual enum and the conversion
implemented in From<MintInterface> for AccountInterface (see
decompress_mint.rs:135). Specifically, change the matches on
compressed_mint_interface.account.cold to assert it is Some(ColdContext::Account
{ .. }) and keep the existing failure message ("Cold context should be the Mint
variant") or update it to reflect "Account" if you prefer.
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1118-1128: The current logic logs a warning when derived_pda (from
find_mint_address(mint_signer_pubkey)) doesn't equal self.mint_pda, but we must
fail fast; replace the warn! path in the block that computes mint_signer,
derived_pda, and bump so that the function returns an error (or otherwise
aborts) instead of continuing with inconsistent metadata — e.g., propagate a
Result::Err or use the crate’s common error/bail mechanism with a clear message
including derived_pda, self.mint_pda, and bump to ensure callers cannot receive
a mint with mismatched PDA/bump.
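A sketch of the fail-fast check; `Pubkey` is the standard Solana type, and `String` stands in for the crate's error type:

```rust
use solana_sdk::pubkey::Pubkey;

fn check_mint_pda(derived: &Pubkey, expected: &Pubkey, bump: u8) -> Result<(), String> {
    if derived != expected {
        // Abort instead of continuing with inconsistent metadata.
        return Err(format!(
            "mint PDA mismatch: derived {derived}, expected {expected} (bump {bump})"
        ));
    }
    Ok(())
}
```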
In `@sdk-libs/client/src/interface/decompress_mint.rs`:
- Around line 138-142: The conversion for MintState::None should not fabricate a
valid AccountInterface via Account::default(); update the mapping in
decompress_mint.rs so that when matching MintState::None you propagate absence
(e.g., set account to None by changing the Account field on the target type to
Option<AccountInterface>) instead of Account::default(), and adjust the related
types/From impls and any callers of the struct (e.g., the From<MintState> impl
and any uses of the resulting MintInterface/MintStruct) to handle
Option<AccountInterface>; alternatively, if changing the type is unacceptable,
add an explicit documented sentinel (e.g., account = None or account_is_missing
flag) and ensure callers are updated to check that sentinel rather than relying
on lamports == 0.
- Around line 102-145: The Cold mint arm in the From<MintInterface> for
AccountInterface impl is constructing ColdContext::Account(compressed) but tests
expect ColdContext::Mint { .. }; change the MintState::Cold branch to build
ColdContext::Mint and include the previously discarded mint_data (do not use
mint_data: _), populating the ColdContext::Mint fields from compressed and
mint_data so AccountInterface::cold holds ColdContext::Mint and account remains
constructed from compressed.lamports/data/owner as before.
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 440-559: convert_account_interface currently hardcodes
TreeInfo.queue = tree_info.tree and TreeType::StateV1 when building
CompressedAccount in all cold branches; change it to take the real values from
the indexer response (or map them) instead: use tree_info.queue (not
tree_info.tree), and use tree_info.tree_type (or derive via a mapping function)
when populating TreeInfo in the Account, Token and Mint branches inside
convert_account_interface so v2/address trees keep correct queue and TreeType.
- Around line 562-624: In convert_token_account_interface, the owner_override
passed to TokenAccountInterface::cold is wrong (you're passing
indexer_tai.account.key — the token account address); instead set owner_override
to the token's owner (the wallet owner for ATAs or the PDA for program-owned
accounts) by using the correct owner field from the indexer payload (e.g., the
owner from indexer_tai.token or fall back to indexer_tai.account.account.owner
as appropriate) and update the TokenAccountInterface::cold call accordingly;
also fix convert_account_interface where unwrap_or_default() silently discards
token data deserialization errors—replace that unwrap_or_default() with proper
Result/error propagation so deserialization failures return an RpcError rather
than a default value.
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 548-557: The current code returns Some(MintInterface { state:
MintState::None }) for a not-found mint which is inconsistent with
get_account_interface/get_token_account_interface; update the RPC handler that
constructs the Response to return value: None when the mint isn't found instead
of Some(MintInterface { state: MintState::None }), so callers receive
Result<Option<...>, RpcError> with None = not found (adjust any local variables
like address_tree/compressed_address usage accordingly and remove the
MintState::None construction).
- Around line 560-577: The current implementation of
get_multiple_account_interfaces iterates and calls get_account_interface per
address causing an N+1 query pattern; update it to use the Photon indexer's
batch endpoint by invoking get_multiple_account_interfaces_post (or equivalent
batch RPC) instead of looping so all addresses are fetched in one request;
modify the get_multiple_account_interfaces function to call the batch method,
handle the returned Vec<Option<AccountInterface>> and construct the Response
with Context { slot } as before, and keep fallback to the per-address loop only
if the batch endpoint is not available.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 1403-1424: The current find_compressed_account_by_onchain_pubkey
and find_multiple_compressed_accounts_by_onchain_pubkeys perform O(n) linear
scans over compressed_accounts; switch them to use the onchain_pubkey_index
HashMap for O(1) lookups by retrieving the stored index
(onchain_pubkey_index.get(onchain_pubkey) -> idx) and then returning
compressed_accounts.get(idx), and update
find_multiple_compressed_accounts_by_onchain_pubkeys to map each key through the
new lookup; alternatively, if maintaining onchain_pubkey_index is undesirable
for tests, remove the onchain_pubkey_index field and its usage and add a brief
comment documenting that linear search is acceptable for test workloads.
- Around line 110-112: The onchain_pubkey_index HashMap is never updated; either
remove it or maintain it on inserts/removals: when adding a compressed account
in process_v1_compressed_account and add_event_and_compressed_accounts, insert
mapping from the account's onchain_pubkey ([u8;32]) to its index (usize) into
onchain_pubkey_index; when removing/replacing compressed_accounts update or
remove the corresponding key; then change
find_compressed_account_by_onchain_pubkey to look up the index in
onchain_pubkey_index and return compressed_accounts[index] (fall back to a safe
linear scan if missing) and add/adjust tests to assert index consistency.
- Around line 1427-1438: find_token_account_by_onchain_pubkey currently does a
linear scan over token_compressed_accounts using
Self::extract_onchain_pubkey_from_data; either add an indexed lookup or document
the linear-scan choice: to index, maintain a HashMap<[u8;32],
TokenDataWithMerkleContext> (or map to indexes) alongside
token_compressed_accounts and populate it where compressed token accounts are
inserted/updated, then change find_token_account_by_onchain_pubkey to query that
map using the extracted onchain pubkey; alternatively, add a short doc comment
on the find_token_account_by_onchain_pubkey function stating the linear search
is intentional for test simplicity so reviewers know it’s not an omission.
- Around line 1383-1384: The onchain_pubkey_index HashMap is created empty in
new() but never maintained when accounts change; update
process_v1_compressed_account to insert mapping entries into
onchain_pubkey_index whenever an account is added/updated (use the account's
onchain pubkey as key and relevant interface lookup info as value) and remove
the corresponding entry when an account is nullified or deleted, ensuring any
branch that mutates accounts (updates, overwrites, or clears) performs the same
insert/remove bookkeeping so the index remains in sync with the accounts
vector/registry.
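A std-only sketch of the bookkeeping invariant, with simplified stand-ins for the test indexer's storage; `swap_remove` keeps stored indexes dense, so the displaced element's index is rewritten:

```rust
use std::collections::HashMap;

struct Accounts {
    items: Vec<([u8; 32], Vec<u8>)>,     // (onchain pubkey, data)
    by_pubkey: HashMap<[u8; 32], usize>, // onchain_pubkey_index
}

impl Accounts {
    fn insert(&mut self, key: [u8; 32], data: Vec<u8>) {
        self.by_pubkey.insert(key, self.items.len());
        self.items.push((key, data));
    }

    fn remove(&mut self, key: &[u8; 32]) {
        if let Some(idx) = self.by_pubkey.remove(key) {
            self.items.swap_remove(idx);
            // The element moved into `idx` (if any) needs its index updated.
            if let Some((moved_key, _)) = self.items.get(idx) {
                self.by_pubkey.insert(*moved_key, idx);
            }
        }
    }
}
```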
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 458-463: The comment highlights confusing owner semantics in
TokenAccountInterface::cold construction where owner is set to *address with a
misleading comment; update the cold-path to use the actual token account owner
(token_acc.token_data.owner) when available, or clearly document the differing
semantics between get_account_interface and get_token_account_interface so
callers aren't misled; locate the TokenAccountInterface::cold invocation (using
compressed_token and *address) and replace the owner argument with the token
owner field or add a concise comment explaining why *address is used and how
owner differs across get_account_interface/get_token_account_interface.
- Around line 396-397: The conversion error raised by the .try_into() call is
wrapped into a generic RpcError::CustomError losing field context; change the
map_err to include the failing field and use the contextual
IndexerError::DataDecodeError (or construct a message like
format!("DataDecodeError on <field_name>: {:?}", e)) and convert that into
RpcError (e.g., IndexerError::DataDecodeError { field: "<field_name>".into(),
source: Box::new(e) } then .into() or format into RpcError::CustomError) so the
error identifies which field failed—update the .try_into().map_err(|e|
RpcError::CustomError(...)) to include the field name and IndexerError variant
instead of the current generic message.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 434-439: The MissingResult error messages lack the tree
identifier; update the IndexerError::MissingResult construction so the message
includes the missing tree id (e.g., the key from value.merkle_trees[i]) when
QUEUE_TREE_MAPPING.get(...) returns None — do this for the occurrence using
QUEUE_TREE_MAPPING and value.merkle_trees[i] (and the other similar sites
flagged in the review), by formatting the message to include the tree id string
so logs show which tree key failed.
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 493-539: The code builds a cold token AccountInterface using the
program owner (indexer_ai.account.owner) which is incorrect; change the owner
argument passed to AccountInterface::cold_token to the token's wallet owner
(e.g., token_data.owner or compressed_token.token.owner) when creating the
AccountInterface in the IndexerColdContext::Token branch so parsed.owner
reflects the actual token owner; update the call to
AccountInterface::cold_token(indexer_ai.key, compressed_token, /* use token
owner here instead of indexer_ai.account.owner */) and ensure the owner value's
type matches the expected parameter.
- Around line 1317-1324: The code currently takes
resp.value.items.into_iter().next() and uses *owner as the
TokenAccountInterface.key which silently hides multiple matches and uses the
wrong key; change this to explicitly check resp.value.items length and error if
there are 0 or >1 matches, and when there is exactly one match extract the token
account's compressed account address (e.g., a field on token_acc like
compressed_address or similar) to use as the TokenAccountInterface key instead
of *owner, returning an error if that compressed address is missing; update the
call site that uses TokenAccountInterface::cold to accept/propagate a Result so
callers handle the new error case and keep LIGHT_TOKEN_PROGRAM_ID usage
unchanged.
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 1402-1437: Add a duplicate-match guard to the single-lookups so we
don't silently return the first hit: in
find_compressed_account_by_onchain_pubkey and
find_token_account_by_onchain_pubkey, after finding the first matching entry
(using the existing predicate that calls
Self::extract_onchain_pubkey_from_data), check whether a second match exists
(e.g., iterate with filter using the same predicate and take(2) to detect >1
without allocating) and emit a debug_assert! or a warning log if a duplicate is
found; keep the original return value but ensure the duplicate detection uses
the same helper (Self::extract_onchain_pubkey_from_data) so behavior stays
consistent.
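A std-only sketch of the `take(2)` guard; `matches` stands in for the predicate built on `Self::extract_onchain_pubkey_from_data`:

```rust
fn find_unique<'a, T>(
    items: &'a [T],
    mut matches: impl FnMut(&T) -> bool,
) -> Option<&'a T> {
    // take(2) detects a duplicate without scanning the rest or allocating.
    let mut hits = items.iter().filter(|&t| matches(t)).take(2);
    let first = hits.next();
    if hits.next().is_some() {
        debug_assert!(false, "duplicate onchain pubkey match in test indexer");
    }
    first
}
```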
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 496-505: The current code in rpc.rs blindly uses
items.into_iter().next() (e.g., when building the Response with
TokenAccountInterface::cold) which silently picks an arbitrary match when the
indexer returns multiple entries; update each of the three call sites (the
blocks that currently call items.into_iter().next() and return Ok(Response { ...
TokenAccountInterface::cold ... })) to first inspect the collection: if
items.is_empty() return the existing not-found behavior, if items.len() > 1
return an explicit Err (or at minimum log/warn and return an error) indicating
ambiguous/multiple indexer results, and only when len() == 1 proceed to use the
single item to construct the Response; ensure the error path uses the same error
type returned by the surrounding function so callers can handle it.
- Around line 592-614: The get_token_account_by_owner_mint path needs an
on-chain hot lookup before the indexer fallback: replicate the pattern used in
get_token_account_interface() and get_ata_interface() by first calling
self.context.get_account(owner). If that returns an account owned by the
expected token program and its data/mint matches the requested mint, construct
and return a TokenAccountInterface::hot (using the same field order as other hot
responses). Only if the on-chain check yields no matching token account should
you query self.indexer.get_compressed_token_accounts_by_owner(...) as the
existing fallback.
- Around line 640-651: The hot-mint branch in get_mint_interface currently
treats any non-zero-lamport account as a mint; update the check after
self.context.get_account(address) so it also verifies account.owner ==
LIGHT_TOKEN_PROGRAM_ID before returning the MintInterface with MintState::Hot {
account }; mirror the ownership validation used in get_token_account_interface
and get_ata_interface to ensure only accounts owned by LIGHT_TOKEN_PROGRAM_ID
are classified as hot mints.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 1317-1324: The current use of resp.value.items.into_iter().next()
in the code that builds TokenAccountInterface::cold (using owner and
light_sdk::constants::LIGHT_TOKEN_PROGRAM_ID) silently ignores additional
matching accounts; change the logic to detect ambiguity by inspecting
resp.value.items.len(): if >1 return a descriptive error (or at minimum log a
warning) that multiple compressed token accounts were found for the given
owner/mint instead of selecting the first, and ensure the method signature and
callers handle/propagate this error (or documented warning) so callers are aware
of the ambiguity.
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 592-613: The lookup currently takes the first item from
result.value.items and uses *owner as the token key, which can misidentify
accounts and hide duplicates; change the logic in the indexer branch (where
get_compressed_token_accounts_by_owner is called and token_acc is bound) to: 1)
check result.value.items.len() and return an RpcError::CustomError (or
warn/fail-fast) if more than one matching token account is returned, and 2) use
the actual token account address from the returned item (token_acc's
address/public key field) when constructing TokenAccountInterface::cold instead
of using *owner as the key so the real account address is preserved. Ensure you
reference get_compressed_token_accounts_by_owner, result.value.items, token_acc,
and TokenAccountInterface::cold when making the change.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/src/compressible/mint/state.rs (1)
32-32: Off-by-one error in the compressibility boundary: accounts become compressible one epoch too early. The formula `last_funded_epoch * SLOTS_PER_EPOCH` marks an account compressible during the last funded epoch itself. Since `get_last_funded_epoch` returns the last fully-paid epoch (e.g., epoch 2 means epochs 0–2 are covered), the compressibility threshold should be the first unpaid epoch: `compressible_slot = (last_funded_epoch + 1) * SLOTS_PER_EPOCH`. This ensures accounts only become compressible once they enter an epoch lacking rent coverage, aligning with the documented compressibility window: an account is Funded when it has rent for current + next epoch, and Compressible when it lacks that coverage.
Proposed fix
```diff
-    Ok(last_funded_epoch * SLOTS_PER_EPOCH)
+    Ok((last_funded_epoch + 1) * SLOTS_PER_EPOCH)
```
…nt accounts refactor: streamline CI workflow in programs.yml and justfile
…ain permissions Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation