@Notoriousbrain Notoriousbrain commented Aug 11, 2025

PR: Global Contributor Leaderboard (GitHub + GitLab)

Summary

This PR introduces the Global Contributor Leaderboard foundation (issue #142).
It stores and ranks contributions from GitHub and GitLab in a single unified table (contrib_rollups), synced into Redis for fast queries, with full API + UI integration.


Context & Motivation

To rank OSSdotnow users globally, we need:

  • Durable storage of contributions per user × time window
  • Fast read path for leaderboard queries
  • Scalable ingestion via provider APIs (GitHub/GitLab)

👉 Schema has been simplified compared to earlier designs:
We now store just two rolling periods per provider (last_30d, last_365d) in contrib_rollups.
This keeps storage lean and allows O(1) updates from daily snapshots.
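
As a rough illustration, the table definition could look like this in Drizzle (a sketch only; the enum and column names are inferred from this description rather than copied from the actual migration — the unique index name matches the one discussed in review below):

import { pgEnum, pgTable, text, integer, timestamp, uniqueIndex } from 'drizzle-orm/pg-core';

// Hypothetical enum name inferred from the PR description.
export const contribPeriod = pgEnum('contrib_period', ['last_30d', 'last_365d']);

export const contribRollups = pgTable(
  'contrib_rollups',
  {
    userId: text('user_id').notNull(),
    period: contribPeriod('period').notNull(),
    commits: integer('commits').notNull().default(0),
    prs: integer('prs').notNull().default(0),
    issues: integer('issues').notNull().default(0),
    total: integer('total').notNull().default(0),
    fetchedAt: timestamp('fetched_at', { withTimezone: true }).notNull().defaultNow(),
    updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
  },
  (t) => [uniqueIndex('contrib_rollups_user_period_uidx').on(t.userId, t.period)],
);

One row per user × period keeps the daily refresh a single upsert per window.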


Scope (Commit 1/n)

  • Add contrib_rollups table:
    • user_id + period (unique key)
    • Provider-agnostic contribution counts (commits, prs, issues, total)
    • Timestamps (fetched_at, updated_at)
  • Add GitHub provider client (githubProvider.ts):
    • Uses GraphQL API for commits/PRs/issues
    • Rollups for last 30d + last 365d
  • Add GitLab provider client (gitlabProvider.ts):
    • Uses REST events API
    • Derives commits/MRs/issues from public events
    • Rollups for last 30d + last 365d
  • Add Aggregator (refreshUserRollups.ts):
    • Fetches provider data → upserts into contrib_rollups
    • Supports both GitHub and GitLab
    • Batch helper with concurrency limits (see the sketch after this list)
  • Add Redis sync (leaderboard/redis.ts):
    • Pushes contrib_rollups.total into ZSETs:
      • lb:rollups:30d
      • lb:rollups:365d
    • Maintains lb:users set
  • Add API endpoints:
    • /api/leaderboard → JSON, paginated leaderboard
    • /api/leaderboard/details → per-user breakdown
    • /api/leaderboard/profiles → user metadata
    • /api/leaderboard/export → CSV export
    • /api/cron/daily → daily refresh job (auth with CRON_SECRET or x-vercel-cron)
  • Add UI (/leaderboard page + LeaderboardClient):
    • Paging, sorting, CSV export
    • Switch between 30d and 365d
    • Shows commits, PRs, issues, total
    • Displays GitHub/GitLab usernames + avatars
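
The batch helper mentioned in the Aggregator item could look roughly like this (a sketch assuming a refreshUserRollups(userId) call; the real function takes more options):

// Refresh many users with at most `limit` rollup jobs in flight at once.
async function refreshAllUsers(userIds: string[], limit = 4): Promise<void> {
  const queue = [...userIds];
  const workers = Array.from({ length: Math.min(limit, queue.length) }, async () => {
    while (queue.length > 0) {
      const userId = queue.shift()!;
      try {
        await refreshUserRollups(userId); // provider fetch + upsert, per the PR
      } catch (err) {
        console.error(`rollup refresh failed for ${userId}`, err); // keep the batch going
      }
    }
  });
  await Promise.all(workers);
}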

Implementation Details

  • DB:
    contrib_rollups holds snapshots for each user × period.
    Example rows:

    user_id | period    | commits | prs | issues | total | fetched_at
    u1      | last_30d  |      12 |   3 |      5 |    20 | …
    u1      | last_365d |      80 |  30 |     12 |   122 | …
  • Aggregator:
    Runs daily, calls providers, writes rollups, pushes to Redis.

  • Redis:
    ZSETs maintain leaderboard rank with O(log N) updates.
    Reads of the top N users are a single ZSET range query (see the sketch after this list).

  • UI:
    Fetches leaderboard from /api/leaderboard + details/profiles in parallel.
    Allows sorting by rank, user, total, commits, PRs, issues.

  • Cron Auth:
    /api/cron/daily requires Authorization: Bearer ${CRON_SECRET} or Vercel’s x-vercel-cron.
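
The Redis write and read paths could look roughly like this (a sketch assuming the @upstash/redis client used elsewhere in this PR; key names follow the description above):

// Write path: record a user's rollup total in the window's ZSET.
async function pushRollup(userId: string, window: '30d' | '365d', total: number) {
  await redis.zadd(`lb:rollups:${window}`, { score: total, member: userId });
  await redis.sadd('lb:users', userId); // membership set used by cron/backfill
}

// Read path: top N user ids with scores, highest total first.
async function topN(window: '30d' | '365d', n: number) {
  return redis.zrange(`lb:rollups:${window}`, 0, n - 1, { rev: true, withScores: true });
}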


How to Test Locally

  1. Run migrations: bun db:migrate
  2. Set GITHUB_TOKEN, GITLAB_TOKEN, CRON_SECRET in .env (example after this list)
  3. Manually run cron:
    curl -H "Authorization: Bearer $CRON_SECRET" http://localhost:3000/api/cron/daily
  4. Visit /leaderboard and confirm:
    • Data shows up
    • Paging and CSV export work
    • Cron populates Redis
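
For step 2, the .env entries might look like this (placeholder values):

GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxx      # token with read access to public data
GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxx    # GitLab personal access token
CRON_SECRET=some-long-random-string    # shared secret for /api/cron/daily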

Deployment / Rollout

  • Safe to deploy: new table, new APIs, new UI
  • No impact on existing user/auth flows
  • Leaderboard will gradually populate once cron runs

Security / Privacy

  • Only public contribution counts are stored
  • Provider tokens used only in backend cron; never exposed
  • Cron routes protected by CRON_SECRET
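
The guard can be as simple as the sketch below (assuming, per the description, that Vercel marks scheduled invocations with an x-vercel-cron header; the PR's actual helper is isCronAuthorized in packages/env):

// Accept either the shared Bearer secret or Vercel's cron header.
function isAuthorizedCronRequest(req: Request): boolean {
  const authHeader = req.headers.get('authorization');
  if (authHeader === `Bearer ${process.env.CRON_SECRET}`) return true;
  if (req.headers.get('x-vercel-cron')) return true; // set by Vercel Cron, per the PR
  return false;
}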

Rollback Plan

  • Disable cron + Redis sync
  • Drop contrib_rollups table if necessary
  • Remove /leaderboard routes

Checklist

  • Migration applied locally
  • GitHub/GitLab providers tested
  • Redis sync verified
  • API routes tested manually
  • Leaderboard UI renders with seeded data
  • Cron endpoint dry-run tested

Related

Summary by CodeRabbit

  • New Features

    • Global Leaderboard page with sortable table, 30d/365d window selector, pagination, and CSV export.
    • Richer rows with usernames, avatars, and contribution breakdowns.
    • Self-enroll action to join the leaderboard.
  • APIs

    • Endpoints to fetch leaderboard pages, user profiles, and per-user metrics.
    • CSV export endpoint for leaderboard data.
  • Operations

    • Daily cron job to refresh leaderboard data and admin tools for bulk backfill.
  • Style

    • Refined focus rings and corner radii for inputs and selects.
  • Chores

    • Added Vercel cron schedule entry.


vercel bot commented Aug 11, 2025

@Notoriousbrain is attempting to deploy a commit to the ossdotnow Team on Vercel.

A member of the Team first needs to authorize it.

} finally {
setLoading(false);
}
}, []);

@vercel vercel bot Aug 18, 2025


Suggested change
}, []);
}, [provider]);

The doFetch callback is missing provider in its dependency array, causing stale closures when the provider selection changes.


Analysis

The doFetch function on line 107 uses the provider state variable (line 111: await fetchTop(w, provider, lim, cur)), but the useCallback dependency array on line 140 is empty []. This creates a stale closure where the callback captures the initial value of provider ('combined') and never updates when the user selects a different provider option.

When users change the provider selection using the dropdown (which calls setProvider), the component will continue making API calls with the old provider value, showing incorrect leaderboard data. This breaks a core feature of the leaderboard interface.


Recommendation

Add provider to the dependency array of the useCallback on line 140:

}, [provider]);

This will ensure the callback is recreated whenever the provider state changes, capturing the current provider value for API calls.

Comment on lines 119 to 120
fromDayUtc: yesterday,
toDayUtc: today,

@vercel vercel bot Aug 18, 2025


The daily cron job incorrectly includes the current day (today) in its refresh range, which could lead to processing incomplete contribution data.

📝 Patch Details
diff --git a/apps/web/app/api/internal/leaderboard/cron/daily/route.ts b/apps/web/app/api/internal/leaderboard/cron/daily/route.ts
index 92c86f9..fc12fa4 100644
--- a/apps/web/app/api/internal/leaderboard/cron/daily/route.ts
+++ b/apps/web/app/api/internal/leaderboard/cron/daily/route.ts
@@ -62,7 +62,7 @@ export async function GET(req: NextRequest) {
         processed: 0,
         skipped: 0,
         errors: [],
-        window: { from: yesterday.toISOString().slice(0, 10), to: today.toISOString().slice(0, 10) },
+        window: { from: yesterday.toISOString().slice(0, 10), to: yesterday.toISOString().slice(0, 10) },
         note: `No user IDs in Redis set "${USER_SET}". Run a backfill/refresh to seed it.`,
       });
     }
@@ -87,7 +87,7 @@ export async function GET(req: NextRequest) {
         dryRun: true,
         scanned: userIds.length,
         sample: preview.slice(0, 10),
-        window: { from: yesterday.toISOString().slice(0, 10), to: today.toISOString().slice(0, 10) },
+        window: { from: yesterday.toISOString().slice(0, 10), to: yesterday.toISOString().slice(0, 10) },
       });
     }
 
@@ -117,7 +117,7 @@ export async function GET(req: NextRequest) {
               githubLogin,
               gitlabUsername,
               fromDayUtc: yesterday,
-              toDayUtc: today,
+              toDayUtc: yesterday,
               githubToken: env.GITHUB_TOKEN,            // may be undefined ⇒ GH skipped
               gitlabToken: env.GITLAB_TOKEN,            // optional
               gitlabBaseUrl: env.GITLAB_ISSUER || "https://gitlab.com",
@@ -141,7 +141,7 @@ export async function GET(req: NextRequest) {
       processed,
       skipped,
       errors,
-      window: { from: yesterday.toISOString().slice(0, 10), to: today.toISOString().slice(0, 10) },
+      window: { from: yesterday.toISOString().slice(0, 10), to: yesterday.toISOString().slice(0, 10) },
     });
   } catch (err: unknown) {
     // Surface the actual error during development

Analysis

The daily cron job is designed to refresh contribution data for completed days, but it's passing a date range that includes both yesterday and today to the refreshUserDayRange function. Looking at lines 119-120, the function call passes fromDayUtc: yesterday, toDayUtc: today.

However, examining the refreshUserDayRange implementation shows that toDayUtc is treated as inclusive (line 240: toInclusive and line 247: d.getTime() <= toInclusive.getTime()). This means the cron job processes both yesterday's complete data AND today's partial/incomplete data.

For a daily cron job, this is problematic because:

  1. Today's contribution data is likely incomplete when the cron runs
  2. This could lead to inconsistent leaderboard data where today's partial counts are recorded
  3. The next day's cron run would overwrite today's data with complete data, causing fluctuations

A daily cron should typically only process the previous day's complete data.


Recommendation

Change the cron job to only process yesterday's data by passing yesterday for both fromDayUtc and toDayUtc, or adjust the range to exclude today:

await refreshUserDayRange(
  { db },
  {
    userId,
    githubLogin,
    gitlabUsername,
    fromDayUtc: yesterday,
    toDayUtc: yesterday,  // Only process yesterday
    githubToken: env.GITHUB_TOKEN,
    gitlabToken: env.GITLAB_TOKEN,
    gitlabBaseUrl: env.GITLAB_ISSUER || "https://gitlab.com",
    concurrency: workers,
  },
);

This ensures the daily cron only processes complete contribution data from the previous day.

@Notoriousbrain Notoriousbrain changed the title feat(db): add contributor aggregation tables (Commit 1/n for Global Leaderboard #142) feat(leaderboard): add global contributor leaderboard based on total github/gitlab contributions Aug 30, 2025
@Notoriousbrain Notoriousbrain marked this pull request as ready for review August 30, 2025 19:28

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 34

🧹 Nitpick comments (49)
packages/ui/src/components/select.tsx (1)

113-113: Typo: Tailwind class outline-hidden is not valid.

Use outline-none or remove it.

Apply:

- "… text-sm outline-hidden select-none …"
+ "… text-sm outline-none select-none …"
packages/db/drizzle/meta/0022_snapshot.json (2)

1958-1978: Guard data quality: enforce non-negative counts.

Prevent bad writes with a CHECK constraint.

Drizzle:

check('contrib_daily_counts_non_negative', sql`commits >= 0 AND prs >= 0 AND issues >= 0`)

1986-1992: Auto-update updated_at on modification.

DEFAULT now() doesn’t update on UPDATE. Consider a trigger to keep it current.

SQL:

create or replace function set_updated_at() returns trigger as $$
begin new.updated_at = now(); return new; end; $$ language plpgsql;

create trigger trg_contrib_daily_updated
before update on contrib_daily
for each row execute function set_updated_at();

(Repeat for contrib_totals.)

packages/db/drizzle/0022_faulty_gauntlet.sql (1)

2-26: Consider referential integrity to users table (if available)

A FK on user_id prevents orphan rows and improves planner stats. If your users table is public.users(id), add:

ALTER TABLE "contrib_daily"
  ADD CONSTRAINT contrib_daily_user_fk
  FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE CASCADE;

ALTER TABLE "contrib_totals"
  ADD CONSTRAINT contrib_totals_user_fk
  FOREIGN KEY ("user_id") REFERENCES "public"."users"("id") ON DELETE CASCADE;

Verify actual users table name/schema before applying.

packages/api/src/providers/gitlab.ts (5)

48-53: Simplify startOfUtcDay implementation

Minor cleanup; avoids redundant ternary.

 function startOfUtcDay(d: DateLike): Date {
-  const date = d instanceof Date ? new Date(d) : new Date(d);
+  const date = new Date(d); // both ternary branches were identical
   return new Date(
     Date.UTC(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate(), 0, 0, 0, 0),
   );
 }

(Note: current code already works; this is purely for clarity.)


78-83: Send one GitLab auth header, not both

Pick either PRIVATE-TOKEN (personal/project tokens) or Authorization: Bearer (OAuth/JWT). Sending both can confuse proxies and complicate debugging.

-  if (token) {
-    headers['PRIVATE-TOKEN'] = token;
-    headers['Authorization'] = `Bearer ${token}`;
-  }
+  if (token) {
+    headers['PRIVATE-TOKEN'] = token; // or: headers['Authorization'] = `Bearer ${token}`;
+  }

90-95: Retry window shares a single 15s timeout

The 429 wait plus second fetch can exceed 15s and get aborted mid-flight. Use per-attempt timeouts or a longer overall budget.

-  const controller = new AbortController();
-  const to = setTimeout(() => controller.abort(), 15_000);
+  const makeController = () => {
+    const c = new AbortController();
+    const t = setTimeout(() => c.abort(), 15_000);
+    return { c, t };
+  };
+  let { c, t } = makeController();
   try {
-    let res = await fetch(u.toString(), { headers, signal: controller.signal });
+    let res = await fetch(u.toString(), { headers, signal: c.signal });
     if (res.status === 429) {
       const retryAfter = Number(res.headers.get('Retry-After') || 1);
       const waitSec = Math.min(Math.max(retryAfter, 1), 10);
       await sleep(waitSec * 1000);
-      res = await fetch(u.toString(), { headers, signal: controller.signal });
+      clearTimeout(t);
+      ({ c, t } = makeController());
+      res = await fetch(u.toString(), { headers, signal: c.signal });
     }
     …
   } finally {
-    clearTimeout(to);
+    clearTimeout(t);
   }


230-237: Parallelize project visibility lookups with bounded concurrency

The current per-project await is O(N) round trips; with many projects this is slow. Batch with Promise.all and keep your simple cache.

-  const out: z.infer<typeof GitlabEventSchema>[] = [];
-  for (const [pid, list] of byProject) {
-    const vis = await getProjectVisibility(baseUrl, pid, token);
-    if (vis === 'public') out.push(...list);
-  }
-  return out;
+  const entries = [...byProject.entries()];
+  const results = await Promise.all(
+    entries.map(async ([pid, list]) => ({
+      list,
+      vis: await getProjectVisibility(baseUrl, pid, token),
+    })),
+  );
+  const out: z.infer<typeof GitlabEventSchema>[] = [];
+  for (const r of results) if (r.vis === 'public') out.push(...r.list);
+  return out;

356-366: Max pages may truncate active users’ history

For heavy contributors, 25 pages at 100 events/page might under-scan a full year. Confirm with real accounts and tune maxPages or early-exit heuristics.

packages/api/src/providers/github.ts (1)

70-89: Optional: add request timeout/abort for GitHub GraphQL

Mirror the GitLab 15s timeout to avoid hung HTTP calls.

scripts/github-test.ts (1)

15-32: Optional: add --help and examples

Small CLI ergonomics boost; prints usage when args invalid or --help is passed.

packages/db/src/schema/index.ts (1)

1-15: Deduplicate double export of './project-claims'

Not harmful, but noisy.

 export * from './project-claims';
 …
-export * from './project-claims';
packages/api/src/utils/rate-limit.ts (1)

12-13: Prefix by environment to avoid cross-env key collisions.

If Redis is shared across dev/staging/prod, include the env in the ratelimit prefix.

-      prefix: `ratelimit:${prefix}`,
+      prefix: `ratelimit:${process.env.VERCEL_ENV ?? process.env.NODE_ENV ?? 'dev'}:${prefix}`,
packages/env/src/verifyCron.ts (1)

1-9: Harden header parsing and comparison (case-insensitive “Bearer”, timing-safe equality).

Small robustness/security wins: accept “bearer” (RFC allows case-insensitive scheme) and compare tokens in constant time.

-import { env } from './server';
+import { env } from './server';
+import { timingSafeEqual } from 'node:crypto';

 export function isCronAuthorized(headerValue: string | null | undefined) {
   if (!headerValue) return false;
-  const token = headerValue.startsWith('Bearer ')
-    ? headerValue.slice('Bearer '.length).trim()
-    : headerValue.trim();
-  return token.length > 0 && token === env.CRON_SECRET;
+  const match = /^Bearer\s+(.+)$/i.exec(headerValue);
+  const raw = (match ? match[1] : headerValue).trim();
+  if (!raw || !env.CRON_SECRET) return false;
+  const a = Buffer.from(raw);
+  const b = Buffer.from(env.CRON_SECRET);
+  return a.length === b.length && timingSafeEqual(a, b);
 }
packages/api/src/utils/cache.ts (1)

41-47: Allow invalidation for custom prefixes set at write-time.

invalidateCache always scopes to the default prefix, but getCached allows a custom prefix. Add an optional prefix to invalidation for consistency.

-export async function invalidateCache(pattern: string): Promise<void> {
+export async function invalidateCache(
+  pattern: string,
+  options: { prefix?: string } = {},
+): Promise<void> {
   if (!pattern || typeof pattern !== 'string') {
     throw new Error('Pattern must be a non-empty string');
   }
   try {
-    const fullPattern = `${CACHE_PREFIX}${pattern}*`;
+    const fullPattern = `${options.prefix ?? CACHE_PREFIX}${pattern}*`;
packages/api/src/redis/client.ts (1)

4-7: Guard env and fail fast with a clear error

Avoid silent misconfig; throw if URL/token are missing before constructing the client.

 export const redis = new Redis({
-  url: env.UPSTASH_REDIS_REST_URL,
-  token: env.UPSTASH_REDIS_REST_TOKEN,
+  // Fail-fast if misconfigured
+  url: (() => {
+    if (!env.UPSTASH_REDIS_REST_URL) throw new Error('UPSTASH_REDIS_REST_URL is not set');
+    return env.UPSTASH_REDIS_REST_URL;
+  })(),
+  token: (() => {
+    if (!env.UPSTASH_REDIS_REST_TOKEN) throw new Error('UPSTASH_REDIS_REST_TOKEN is not set');
+    return env.UPSTASH_REDIS_REST_TOKEN;
+  })(),
 });
apps/web/app/api/leaderboard/profiles/route.ts (2)

16-21: Return structured validation errors (avoid leaking parser internals in plain text)

Respond JSON with flattened errors; keeps API consistent with the success shape.

-  if (!parsed.success) {
-    return new Response(`Bad Request: ${parsed.error.message}`, { status: 400 });
-  }
+  if (!parsed.success) {
+    const err = parsed.error.flatten();
+    return Response.json({ ok: false, error: 'Invalid body', details: err }, { status: 400 });
+  }

20-21: Optional: de-duplicate userIds to shrink Redis round-trips

If duplicates aren’t meaningful to callers, dedupe before fetching to reduce pipeline size.

apps/web/app/(public)/leaderboard/page.tsx (2)

6-6: Type searchParams as a plain object (no Promise)

Next.js passes a plain object; current Promise type needlessly complicates usage (though await on a value works).

-type SearchParams = Promise<Record<string, string | string[] | undefined>>;
+type SearchParams = Record<string, string | string[] | undefined>;

14-18: Simplify: remove unnecessary await

After correcting the type, read the object directly.

-export default async function LeaderboardPage({ searchParams }: { searchParams?: SearchParams }) {
-  const sp = await searchParams;
+export default async function LeaderboardPage({ searchParams }: { searchParams?: SearchParams }) {
+  const sp = searchParams;
apps/web/app/api/leaderboard/route.ts (2)

11-15: Align default window with UI (optional)

The page defaults to 365d; consider matching here for consistency.

-  window: z.enum(HAS_ALL_TIME ? (["all", "30d", "365d"] as const) : (["30d", "365d"] as const))
-    .default("30d"),
+  window: z.enum(HAS_ALL_TIME ? (["all", "30d", "365d"] as const) : (["30d", "365d"] as const))
+    .default("365d"),

1-3: Make non-cache semantics explicit

Dynamic is sufficient, but adding revalidate=0 documents the intent for readers/tools.

 export const runtime = "nodejs";
 export const dynamic = "force-dynamic";
+export const revalidate = 0;
packages/db/drizzle/meta/0023_snapshot.json (1)

1987-1993: updated_at default now() isn’t an auto-update-on-write

Postgres DEFAULT only applies on INSERT. If you rely on updated_at for freshness, add a trigger or update it in write paths.

scripts/lb-seed-all-users.ts (3)

12-13: Remove unused META key and hgetall side-effect

META() and the hgetall read are unused and add unnecessary network I/O.

-const META = (id: string) => `lb:user:${id}`;

73-90: Make batching robust and cheaper on Redis

Use Promise.allSettled per batch to avoid failing the whole batch on one error and pipeline sadd.

-      await Promise.all(
-        chunk.map(async (id) => {
-          const username = usernameByUserId.get(id);
-          await setUserMeta(id, { username }, { seedLeaderboards: false });
-          await redis.sadd(USER_SET, id);
-        }),
-      );
+      const pipe = redis.pipeline();
+      await Promise.allSettled(
+        chunk.map(async (id) => {
+          const username = usernameByUserId.get(id);
+          await setUserMeta(id, { username }, { seedLeaderboards: false });
+          pipe.sadd(USER_SET, id);
+        }),
+      );
+      await pipe.exec();

19-28: Pass schema to drizzle for typed queries

Providing schema improves safety and DX.

-  const db = drizzle(pg);
+  const { schema } = await import('../packages/db/src/schema');
+  const db = drizzle(pg, { schema });
packages/auth/src/leaderboard-hooks.ts (3)

5-8: Double-syncing leaderboards

setUserMeta with seedLeaderboards: true already calls syncUserLeaderboards. Your extra call duplicates work.

 export async function onUserCreated(userId: string) {
-  await setUserMeta(userId, {}, { seedLeaderboards: true });
-  await syncUserLeaderboards(db, userId);
+  await setUserMeta(userId, {}, { seedLeaderboards: true });
 }

17-27: Same duplication on GitHub link path

Remove the second sync or set seedLeaderboards: false and call sync once.

-    if (login) {
-      await setUserMeta(userId, { githubLogin: login }, { seedLeaderboards: true });
-      await syncUserLeaderboards(db, userId);
-    }
+    if (login) {
+      await setUserMeta(userId, { githubLogin: login }, { seedLeaderboards: true });
+    }

29-39: Same duplication on GitLab link path

Mirror the GitHub change.

-    if (username) {
-      await setUserMeta(userId, { gitlabUsername: username }, { seedLeaderboards: true });
-      await syncUserLeaderboards(db, userId);
-    }
+    if (username) {
+      await setUserMeta(userId, { gitlabUsername: username }, { seedLeaderboards: true });
+    }
packages/api/src/leaderboard/meta.ts (1)

30-39: LGTM; consider pipelining hset+sadd

Functionality is correct. Minor optimization: send hset and sadd in a single pipeline roundtrip.

-  if (Object.keys(updates).length > 0) {
-    await redis.hset(`lb:user:${userId}`, updates);
-  }
-  await redis.sadd('lb:users', userId);
+  const pipe = redis.pipeline();
+  if (Object.keys(updates).length > 0) {
+    pipe.hset(`lb:user:${userId}`, updates);
+  }
+  pipe.sadd('lb:users', userId);
+  await pipe.exec();
apps/web/app/api/leaderboard/details/route.ts (3)

12-25: Window typing is wider than schema allows

You accept 'all' in the type but reject it at validation unless HAS_ALL_TIME is true. That cast widens types unnecessarily.

-type WindowKey = '30d' | '365d' | 'all';
+type WindowKey = (typeof HAS_ALL_TIME) extends true ? '30d' | '365d' | 'all' : '30d' | '365d';

44-51: inArray with up to 2000 values is OK but close to heavy queries

If userIds can approach 2k frequently, consider chunking very large lists or using a temporary table (UNNEST join) on the server side to keep plans stable.
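
A chunking sketch (hypothetical helper; the 500-id chunk size is illustrative, and DB/contribRollups are the PR's own exports):

import { and, eq, inArray } from 'drizzle-orm';

// Query rollups for many users in fixed-size chunks so each IN list stays small.
async function selectRollupsChunked(
  db: DB,
  userIds: string[],
  period: 'last_30d' | 'last_365d',
) {
  const rows: (typeof contribRollups.$inferSelect)[] = [];
  for (let i = 0; i < userIds.length; i += 500) {
    const chunk = userIds.slice(i, i + 500);
    rows.push(
      ...(await db
        .select()
        .from(contribRollups)
        .where(and(eq(contribRollups.period, period), inArray(contribRollups.userId, chunk)))),
    );
  }
  return rows;
}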


81-85: Error message leakage

Returning raw error messages can leak internals. Return a generic message and log details.

-    const msg = String(err instanceof Error ? err.message : err);
-    console.error('[leaderboard/details]', err);
-    return new Response(`Internal Error: ${msg}`, { status: 500 });
+    console.error('[leaderboard/details]', err);
+    return new Response('Internal Server Error', { status: 500 });
packages/db/src/schema/contributions.ts (4)

11-12: Unused enum suggests missing provider dimension (or dead code).

You define contribProvider but don’t use it and contrib_rollups has no provider column. If provider-scoped rollups are required (per PR goals), add a provider column keyed by this enum; otherwise remove the enum to avoid drift.


39-43: Prefer a composite primary key and an order-friendly index for fast reads.

Current design uses only a unique index. A PK on (user_id, period) simplifies upserts and enforces identity. Also add (period, total) to accelerate ORDER BY total DESC queries.

Apply this diff:

@@
-import {
+import {
   pgEnum,
   pgTable,
   text,
   integer,
   timestamp,
   uniqueIndex,
   index,
+  primaryKey,
 } from "drizzle-orm/pg-core";
@@
   },
   (t) => [
-    uniqueIndex("contrib_rollups_user_period_uidx").on(t.userId, t.period),
+    primaryKey({ columns: [t.userId, t.period], name: "contrib_rollups_pkey" }),
     index("contrib_rollups_period_idx").on(t.period),
+    index("contrib_rollups_period_total_idx").on(t.period, t.total),
     index("contrib_rollups_user_idx").on(t.userId),
   ],
 );

Also applies to: 1-9


31-36: updatedAt won’t auto-bump on UPDATE.

Drizzle’s defaultNow doesn’t update on subsequent writes. Ensure your write path sets updated_at or add a DB trigger.


13-17: Period includes all_time but read path doesn’t expose it.

If you intend to support “all time” leaderboards, add it to the reader API and Redis keys; otherwise drop it from the enum to avoid confusion.

packages/auth/src/server.ts (2)

15-18: Origin builder may produce http for preview/staging.

For Vercel preview, this builds http://… which can fail due to mixed content or routing. Consider deriving from a request host, adding VERCEL_URL, or forcing https when env !== 'development'.


41-45: Narrow providerId to a literal union.

Typing as 'github' | 'gitlab' avoids silent typos.

Apply this diff:

-            providerId?: string; // 'github' | 'gitlab'
+            providerId?: 'github' | 'gitlab';
packages/api/src/leaderboard/read.ts (2)

90-93: Align max page size with export (200) to reduce round-trips.

Export calls with up to 200, but this clamps at 100. Bump to 200.

Apply this diff:

-  const limit = Math.min(Math.max(opts.limit, 1), 100);
+  const limit = Math.min(Math.max(opts.limit, 1), 200);

68-79: Consider keyset pagination for large offsets.

.offset(start) can degrade for deep pages. If you expect large leaderboards, consider seek pagination using (total DESC, user_id) as cursor keys.
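
A keyset variant could look roughly like this (a sketch; the cursor shape is hypothetical, and Postgres row-value comparison performs the (total DESC, user_id DESC) seek):

import { and, desc, eq, sql } from 'drizzle-orm';

// Page "after" a (total, userId) cursor instead of using OFFSET.
async function pageAfter(db: DB, cursor?: { total: number; userId: string }) {
  return db
    .select({ userId: contribRollups.userId, total: contribRollups.total })
    .from(contribRollups)
    .where(
      and(
        eq(contribRollups.period, 'last_30d'),
        cursor
          ? sql`(${contribRollups.total}, ${contribRollups.userId}) < (${cursor.total}, ${cursor.userId})`
          : undefined,
      ),
    )
    .orderBy(desc(contribRollups.total), desc(contribRollups.userId))
    .limit(100);
}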

apps/web/app/api/leaderboard/export/route.ts (1)

12-17: Provider query param is parsed but unused.

Either use it to scope the export or remove it from the schema to prevent confusion.

apps/web/app/api/leaderboard/dev-backfill/route.ts (1)

20-21: Forward the validated “days” parameter to the internal endpoint (or drop it).

days is validated but unused. Either pass it through or remove it to avoid confusion.

-  const { users, concurrency } = parsed.data;
+  const { users, concurrency, days } = parsed.data;

           body: JSON.stringify({
             userId: u.userId,
             githubLogin: u.githubLogin,
             gitlabUsername: u.gitlabUsername,
+            days,
           }),

Also applies to: 37-41, 62-66

scripts/lb-backfill-365-all.ts (1)

45-50: Origin detection nit.

For non-production, prefer VERCEL_URL (or explicit --origin) over PROJECT_PRODUCTION_URL to avoid pointing at prod accidentally. Not blocking.

-    (process.env.VERCEL_ENV === 'production'
-      ? `https://${process.env.VERCEL_PROJECT_PRODUCTION_URL}`
-      : `http://${process.env.VERCEL_PROJECT_PRODUCTION_URL || 'localhost:3000'}`);
+    (process.env.VERCEL_ENV === 'production'
+      ? `https://${process.env.VERCEL_PROJECT_PRODUCTION_URL}`
+      : `https://${process.env.VERCEL_URL || 'localhost:3000'}`);
packages/api/src/leaderboard/useMeta.ts (1)

38-46: Handle Upstash pipeline return shapes when reading HGETALL.

Pipeline exec results can be plain objects or [err, value] tuples; normalize before mapping.

-  const rows = await pipe.exec();
-
-  return rows.map((raw, i) => {
+  const rows = await pipe.exec();
+  function asObj(v: unknown): Record<string, string | undefined> {
+    const val = Array.isArray(v) ? (v[1] as any) : v;
+    return (val && typeof val === 'object' ? val : {}) as Record<string, string | undefined>;
+  }
+  return rows.map((raw, i) => {
     const id = userIds[i]!;
-    const m = (raw || {}) as Record<string, string | undefined>;
+    const m = asObj(raw);
packages/api/src/leaderboard/redis.ts (1)

15-18: Minor: avoid duplicate SADD.

You add the same member to USER_SET twice; the first one is redundant. Not critical.

 export async function syncUserLeaderboards(db: DB, userId: string): Promise<void> {
-  await redis.sadd(USER_SET, userId);
   …
   pipe.sadd(USER_SET, userId);
   await pipe.exec();
 }

Also applies to: 37-39

apps/web/components/leaderboard/leaderboard-client.tsx (1)

257-265: Refresh button should reset pagination state to page 1

You fetch with cursor=0 but keep the old cursor in state, so the Rank column can mismatch the fetched data. Reset cursor in state and let the effect refetch.

Apply:

-            <Button
-              variant="secondary"
-              onClick={() => doFetch(window, limit, 0)}
+            <Button
+              variant="secondary"
+              onClick={() => {
+                setCursor(0); // effect will refetch with cursor=0
+              }}
apps/web/app/api/internal/leaderboard/backfill/route.ts (2)

24-35: Remove or wire up unused body fields (days, concurrency)

They are validated but unused, which is confusing for clients and future maintainers.

Apply (remove for now):

-    days: z.number().int().min(1).max(365).optional(),
-    concurrency: z.number().int().min(1).max(8).optional(),

130-138: Clarify 409 message when generic LOCK_CONFLICT occurs

withLock throws 'LOCK_CONFLICT' (no provider suffix). Your message currently prints 'unknown'. Include providers in that case for a clearer response.

Apply:

-    if (msg.startsWith('LOCK_CONFLICT')) {
-      const p = msg.split(':')[1] || 'unknown';
-      return new Response(`Conflict: backfill already running for ${p}`, { status: 409 });
-    }
+    if (msg.startsWith('LOCK_CONFLICT')) {
+      const p = msg.includes(':') ? msg.split(':')[1] : providers.join('+');
+      return new Response(`Conflict: backfill already running for ${p}`, { status: 409 });
+    }
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between f4e59bc and 466e3e2.

📒 Files selected for processing (39)
  • apps/web/app/(public)/leaderboard/page.tsx (1 hunks)
  • apps/web/app/api/internal/leaderboard/backfill/route.ts (1 hunks)
  • apps/web/app/api/internal/leaderboard/cron/daily/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/details/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/dev-backfill/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/enroll/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/export/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/profiles/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/route.ts (1 hunks)
  • apps/web/components/leaderboard/leaderboard-client.tsx (1 hunks)
  • packages/api/package.json (1 hunks)
  • packages/api/src/leaderboard/aggregator.ts (1 hunks)
  • packages/api/src/leaderboard/meta.ts (1 hunks)
  • packages/api/src/leaderboard/read.ts (1 hunks)
  • packages/api/src/leaderboard/redis.ts (1 hunks)
  • packages/api/src/leaderboard/useMeta.ts (1 hunks)
  • packages/api/src/providers/github.ts (1 hunks)
  • packages/api/src/providers/gitlab.ts (1 hunks)
  • packages/api/src/redis/client.ts (1 hunks)
  • packages/api/src/redis/lock.ts (1 hunks)
  • packages/api/src/utils/cache.ts (1 hunks)
  • packages/api/src/utils/rate-limit.ts (1 hunks)
  • packages/auth/src/leaderboard-hooks.ts (1 hunks)
  • packages/auth/src/server.ts (2 hunks)
  • packages/db/drizzle/0022_faulty_gauntlet.sql (1 hunks)
  • packages/db/drizzle/0023_bent_strong_guy.sql (1 hunks)
  • packages/db/drizzle/meta/0022_snapshot.json (1 hunks)
  • packages/db/drizzle/meta/0023_snapshot.json (1 hunks)
  • packages/db/drizzle/meta/_journal.json (1 hunks)
  • packages/db/src/schema/contributions.ts (1 hunks)
  • packages/db/src/schema/index.ts (1 hunks)
  • packages/env/package.json (1 hunks)
  • packages/env/src/verifyCron.ts (1 hunks)
  • packages/ui/src/components/input.tsx (1 hunks)
  • packages/ui/src/components/select.tsx (3 hunks)
  • scripts/github-test.ts (1 hunks)
  • scripts/lb-backfill-365-all.ts (1 hunks)
  • scripts/lb-seed-all-users.ts (1 hunks)
  • vercel.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (23)
apps/web/app/api/leaderboard/profiles/route.ts (1)
packages/api/src/leaderboard/useMeta.ts (1)
  • getUserMetas (38-60)
scripts/github-test.ts (1)
packages/api/src/providers/github.ts (1)
  • getGithubContributionTotals (108-143)
apps/web/app/api/internal/leaderboard/cron/daily/route.ts (5)
packages/env/src/verifyCron.ts (1)
  • isCronAuthorized (3-9)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/api/src/leaderboard/aggregator.ts (1)
  • refreshUserRollups (65-169)
packages/api/src/leaderboard/redis.ts (1)
  • syncUserLeaderboards (15-39)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/redis.ts (3)
packages/db/src/index.ts (2)
  • db (16-22)
  • DB (23-23)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/db/src/schema/contributions.ts (1)
  • contribRollups (19-43)
apps/web/components/leaderboard/leaderboard-client.tsx (1)
packages/api/src/leaderboard/read.ts (1)
  • LeaderRow (21-21)
packages/api/src/leaderboard/meta.ts (3)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/api/src/leaderboard/redis.ts (1)
  • syncUserLeaderboards (15-39)
packages/db/src/index.ts (1)
  • db (16-22)
scripts/lb-backfill-365-all.ts (1)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
apps/web/app/api/leaderboard/enroll/route.ts (5)
packages/auth/src/server.ts (1)
  • auth (20-183)
packages/db/src/schema/auth.ts (1)
  • user (4-23)
packages/api/src/leaderboard/meta.ts (1)
  • setUserMeta (11-39)
packages/api/src/leaderboard/redis.ts (1)
  • syncUserLeaderboards (15-39)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
apps/web/app/(public)/leaderboard/page.tsx (1)
apps/web/components/leaderboard/leaderboard-client.tsx (1)
  • LeaderboardClient (97-401)
apps/web/app/api/leaderboard/route.ts (2)
packages/api/src/leaderboard/read.ts (1)
  • getLeaderboardPage (82-103)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/providers/github.ts (1)
packages/api/src/providers/gitlab.ts (2)
  • DateLike (3-3)
  • DateRange (4-4)
packages/api/src/leaderboard/aggregator.ts (4)
packages/db/src/index.ts (2)
  • DB (23-23)
  • db (16-22)
packages/db/src/schema/contributions.ts (1)
  • contribRollups (19-43)
packages/api/src/providers/github.ts (1)
  • getGithubContributionRollups (145-204)
packages/api/src/providers/gitlab.ts (1)
  • getGitlabContributionRollups (312-414)
packages/auth/src/server.ts (5)
packages/env/src/server.ts (1)
  • env (4-32)
packages/db/src/schema/auth.ts (1)
  • account (39-55)
packages/api/src/leaderboard/meta.ts (1)
  • setUserMeta (11-39)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
  • setUserMetaFromProviders (13-36)
scripts/lb-seed-all-users.ts (3)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
  • setUserMetaFromProviders (13-36)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/auth/src/leaderboard-hooks.ts (3)
packages/api/src/leaderboard/meta.ts (1)
  • setUserMeta (11-39)
packages/api/src/leaderboard/redis.ts (1)
  • syncUserLeaderboards (15-39)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/redis/lock.ts (1)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/api/src/leaderboard/read.ts (3)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/db/src/index.ts (2)
  • db (16-22)
  • DB (23-23)
packages/db/src/schema/contributions.ts (1)
  • contribRollups (19-43)
packages/api/src/providers/gitlab.ts (1)
packages/api/src/providers/github.ts (2)
  • DateLike (5-5)
  • DateRange (6-6)
apps/web/app/api/internal/leaderboard/backfill/route.ts (5)
packages/env/src/verifyCron.ts (1)
  • isCronAuthorized (3-9)
packages/api/src/leaderboard/aggregator.ts (1)
  • refreshUserRollups (65-169)
packages/api/src/leaderboard/redis.ts (1)
  • syncUserLeaderboards (15-39)
packages/api/src/leaderboard/useMeta.ts (1)
  • setUserMetaFromProviders (13-36)
packages/api/src/redis/lock.ts (4)
  • backfillLockKey (30-32)
  • withLock (16-24)
  • acquireLock (3-6)
  • releaseLock (8-14)
apps/web/app/api/leaderboard/details/route.ts (3)
packages/api/src/leaderboard/read.ts (1)
  • WindowKey (7-7)
packages/db/src/schema/contributions.ts (1)
  • contribRollups (19-43)
packages/db/src/index.ts (1)
  • db (16-22)
apps/web/app/api/leaderboard/dev-backfill/route.ts (2)
apps/web/app/api/internal/leaderboard/backfill/route.ts (1)
  • POST (37-140)
packages/auth/src/server.ts (1)
  • auth (20-183)
apps/web/app/api/leaderboard/export/route.ts (3)
packages/api/src/leaderboard/read.ts (1)
  • getLeaderboardPage (82-103)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
  • getUserMetas (38-60)
🔇 Additional comments (18)
packages/env/package.json (1)

8-9: ./verify-cron export path is valid
packages/env/src/verifyCron.ts exists and correctly matches the ./verify-cron export in package.json.

vercel.json (1)

6-10: Schedule daily leaderboard cron at UTC midnight
In vercel.json, update the /api/internal/leaderboard/cron/daily schedule from every minute to once per day at midnight UTC:

-    {
-      "path": "/api/internal/leaderboard/cron/daily",
-      "schedule": "* * * * *"
-    }
+    {
+      "path": "/api/internal/leaderboard/cron/daily",
+      "schedule": "0 0 * * *"
+    }

Also confirm your handler reads the secret from the x-vercel-cron header (Vercel Cron sends X-Vercel-Cron, not an Authorization bearer token) or extend isCronAuthorized to accept both header sources.

scripts/github-test.ts (1)

82-86: Nice UX on token check

Clear error and exit when token missing. LGTM.

packages/db/src/schema/index.ts (1)

14-14: Exporting contributions schema: LGTM

Makes the new schema available to consumers.

packages/api/src/utils/rate-limit.ts (1)

2-2: Verify shared Redis client compatibility with Upstash Ratelimit.

Ensure ../redis/client exports an @upstash/redis client instance; Ratelimit expects that type. Also confirm this import works in your target runtime(s) (Node/Edge) without duplicating connections.

apps/web/app/api/leaderboard/profiles/route.ts (1)

7-7: zod import path is invalid

Use the package root; zod/v4 isn’t a public import.

-import { z } from 'zod/v4';
+import { z } from 'zod';

Likely an incorrect or invalid review comment.

apps/web/app/api/leaderboard/enroll/route.ts (1)

19-23: Only guess GitHub login when the session is from GitHub

Using username for GitHub when the user signed in with GitLab will mis-attribute handles.

Would you like me to gate the guess behind a provider check (e.g., from session/account links) and/or persist gitlabUsername when applicable?

packages/db/drizzle/meta/0023_snapshot.json (1)

2056-2133: Enum present but unused

contrib_provider enum is defined but not referenced by any column. Once provider is added to contrib_rollups, wire this enum there.

apps/web/app/api/leaderboard/details/route.ts (1)

7-7: Confirm Zod import path
The monorepo doesn’t declare a specific Zod version and several modules mix import { z } from 'zod/v4' with plain import { z } from 'zod'. Verify that the installed Zod version supports the /v4 subpath export and then unify all imports to either zod/v4 (if on v4) or switch them to import { z } from 'zod'.

packages/api/src/leaderboard/read.ts (2)

49-58: LGTM: graceful Redis fallback added.

Catches Redis errors and returns [], enabling DB fallback as intended.


73-76: Ensure supporting index exists.

Ordering by contrib_rollups.total filtered by period benefits from an index on (period, total). See schema comment proposing contrib_rollups_period_total_idx.

packages/db/drizzle/0023_bent_strong_guy.sql (1)

1-1: Missing contrib_provider enum vs TS schema.

TS defines contribProvider but SQL doesn’t create it. Either add the enum (and a provider column somewhere) or remove the TS enum to avoid drift.

apps/web/app/api/leaderboard/dev-backfill/route.ts (1)

25-29: Auth check looks good.

Admin/moderator gate via session is appropriate here.

packages/api/src/leaderboard/redis.ts (1)

49-71: ZSET read logic looks robust.

Covers both object-with-scores and interleaved return shapes.

apps/web/app/api/internal/leaderboard/cron/daily/route.ts (2)

8-8: Confirm Zod import path resolves in your toolchain

Most setups import from 'zod' (not 'zod/v4'). If subpath exports aren’t configured, this will fail at runtime.

Apply if needed:

-import { z } from 'zod/v4';
+import { z } from 'zod';

128-143: Daily cron now avoids “today” window — LGTM

The job refreshes rolling 30d/365d rollups and syncs Redis only; no inclusive “today” range. This resolves the earlier concern about processing incomplete day data.

apps/web/app/api/internal/leaderboard/backfill/route.ts (1)

11-16: Verify import paths (locks, use-meta) match actual exports

The repo snippets show these under redis/lock and leaderboard/useMeta. Ensure your package.json exports map supports '@workspace/api/locks' and '@workspace/api/use-meta'; otherwise, these imports will fail.

packages/api/src/leaderboard/aggregator.ts (1)

65-169: Upsert logic and provider routing look solid

Idempotent upserts per (userId, period), total computed once, and provider precedence (GitHub over GitLab) are all reasonable for rollups.

import { auth } from '@workspace/auth/server';
import { env } from '@workspace/env/server';
import { NextRequest } from 'next/server';
import { z } from 'zod/v4';

⚠️ Potential issue

Fix incorrect Zod import path (will fail at build).

zod/v4 is not a valid import entry; use zod.

-import { z } from 'zod/v4';
+import { z } from 'zod';

Suggested change
import { z } from 'zod/v4';
import { z } from 'zod';


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 34

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
packages/api/src/routers/early-submissions.ts (1)

12-26: Input schema is requiring repoId implicitly; make it optional or server-controlled.

createInsertSchema(project) includes repoId (notNull) by default, but the server later computes repoId and ignores any client-sent value. This mismatch will either force clients to send a value you ignore or cause validation failures. Make repoId optional (or omit from the input entirely) for this public route.

 const createProjectInput = createInsertSchema(project)
   .omit({
     id: true,
     ownerId: true,
     statusId: true,
     typeId: true,
     createdAt: true,
     updatedAt: true,
     deletedAt: true,
+    repoId: true,
   })
   .extend({
     status: z.string().min(1, 'Project status is required'),
     type: z.string().min(1, 'Project type is required'),
     tags: z.array(z.string()).default([]),
+    // repoId is computed server-side; accept but don't require (or drop entirely)
+    repoId: z.string().optional(),
   });
apps/web/app/(public)/(projects)/projects/project-card.tsx (1)

40-72: Fix GitLab avatar URL concatenation (avoid broken URLs).

GitLab often returns an absolute namespace.avatar_url. Prepending https://gitlab.com can yield invalid URLs. Normalize the value before passing to <Image>.

Apply this diff in both <Image src={...}> occurrences:

- src={
-   repo?.owner?.avatar_url || `https://gitlab.com${repo?.namespace?.avatar_url}`
- }
+ src={
+   repo?.owner?.avatar_url ??
+   (repo?.namespace?.avatar_url?.startsWith('http')
+     ? repo.namespace.avatar_url
+     : `https://gitlab.com${repo?.namespace?.avatar_url ?? ''}`)
+ }
packages/api/src/routers/submissions.ts (2)

31-57: Owner detection is GitHub-specific; support gitHost and GitLab paths.

Current regex assumes github.com. GitLab supports multi-level groups. At minimum, branch by host and accept the first path segment as “owner”.

-async function checkUserOwnsProject(ctx: Context, gitRepoUrl: string) {
-  const githubUrlRegex = /(?:https?:\/\/github\.com\/|^)([^/]+)\/([^/]+?)(?:\.git|\/|$)/;
-  const match = gitRepoUrl.match(githubUrlRegex);
+async function checkUserOwnsProject(ctx: Context, gitRepoUrl: string, gitHost?: string) {
+  const gh = /^(?:https?:\/\/)?(?:www\.)?github\.com\/([^/]+)\/[^/]+/i;
+  const gl = /^(?:https?:\/\/)?(?:www\.)?gitlab\.com\/([^/]+)\/.+/i; // first group segment
+  const rx = gitHost === 'gitlab' ? gl : gh;
+  const match = gitRepoUrl.match(rx);

-  checkUserOwnsProject: publicProcedure
-    .input(z.object({ gitRepoUrl: z.string() }))
+  checkUserOwnsProject: publicProcedure
+    .input(z.object({ gitRepoUrl: z.string(), gitHost: z.enum(['github','gitlab']).optional() }))
     .query(async ({ ctx, input }) => {
-      return await checkUserOwnsProject(ctx, input.gitRepoUrl);
+      return await checkUserOwnsProject(ctx, input.gitRepoUrl, input.gitHost);
     }),

Also applies to: 76-80


13-13: Use official Zod import subpaths
All import … from 'zod/v4' statements must be updated—either import from 'zod' to match the pinned v3.25.67 release, or upgrade to Zod 4 and import from 'zod/v4/core'.

apps/web/components/user/profile-tabs.tsx (1)

125-139: Make Tabs controlled (use value, not defaultValue)

Using defaultValue with onValueChange is semi-controlled and won’t reflect external tab changes. Switch to value={tab}.

-<Tabs defaultValue={tab} onValueChange={setTab} className="w-full">
+<Tabs value={tab} onValueChange={setTab} className="w-full">
♻️ Duplicate comments (1)
packages/db/drizzle/meta/0022_snapshot.json (1)

1-2425: Align contrib_* user_id types and add missing indexes (repeat).

When adding contrib_daily and contrib_totals, ensure:

  • user_id is text and FKs to public.user(id) with onDelete: cascade.
  • contrib_daily has a unique index/PK on (user_id, provider, date_utc) plus supporting secondary indexes for rebuilds/rollups.
  • contrib_totals has unique (user_id, provider) and DESC indexes for fast top-N: (provider ASC, all_time DESC), (provider ASC, last_30d DESC), (provider ASC, last_365d DESC).
🧹 Nitpick comments (80)
packages/api/src/routers/profile.ts (1)

60-65: Simplify check; providerId is non-nullable in schema

provider_id is notNull; the extra check is redundant.

-    if (!userAccount || !userAccount.providerId) {
+    if (!userAccount) {
apps/web/components/project/markdown-textarea.tsx (5)

84-84: Avoid fixed 200px height; prefer flexible min/max constraints.

Fixed height hurts responsiveness and accessibility. Let it grow while capping the max height.

-      <div className="h-[200px]">
+      <div className="min-h-[200px] max-h-[60vh]">

90-90: Allow vertical resize and avoid hard 100% height coupling.

Enable users to expand the editor and decouple from a fixed container height.

-            className="h-full resize-none overflow-y-auto rounded-none border-0 p-2"
+            className="min-h-[200px] max-h-[60vh] resize-y overflow-y-auto rounded-none border-0 p-2"

96-100: Make preview region keyboard-focusable and resilient to long words.

Add tabIndex for keyboard navigation and break-words to prevent overflow with long URLs.

-          <div
-            className="h-full overflow-y-auto border-0 p-2"
+          <div
+            className="h-full overflow-y-auto border-0 p-2 break-words"
             role="region"
             aria-label="Markdown preview"
+            tabIndex={0}
           >

106-108: Make split layout responsive and ensure columns shrink properly.

Collapse to one column on small screens to avoid cramped panes.

-          <div className="grid h-full min-w-0 grid-cols-2">
+          <div className="grid h-full min-w-0 grid-cols-1 md:grid-cols-2">

112-112: Mirror write-pane resizing in split mode.

Keep behavior consistent with write mode and allow vertical resizing.

-                className="h-full resize-none overflow-y-auto rounded-none border-0 p-2"
+                className="min-h-[200px] max-h-[60vh] resize-y overflow-y-auto rounded-none border-0 p-2"
README.md (1)

142-142: Document and reference the db:seed-launches script

The db:seed-launches script is defined in the root package.json and in packages/db/package.json. Add a note in the “Seed the database (optional)” section of README.md to reference it, for example:

Also available: bun db:seed-launches — seeds sample project launches only (idempotent; safe to re-run).
packages/db/src/schema/projects.ts (1)

25-25: Add integrity and lookup indexes for repoId (and consider composite uniqueness with gitHost).

repoId will become a primary lookup key for provider APIs. Recommend adding:

  • index on repo_id for fast lookups
  • composite unique on (git_host, repo_id) to prevent cross-project collisions

Also verify migration order: backfill repo_id for all existing rows before enforcing NOT NULL.

Example index additions in the table builder (outside this hunk):

(table) => [
  index('project_status_id_idx').on(table.statusId),
  index('project_type_id_idx').on(table.typeId),
  index('project_stars_count_desc_idx').on(table.starsCount.desc()),
  index('project_forks_count_desc_idx').on(table.forksCount.desc()),
  // new
  index('project_repo_id_idx').on(table.repoId),
  // optional but recommended
  index('unique_project_host_repoid').on(table.gitHost, table.repoId).unique(),
]
apps/web/components/project/launch-project-dialog.tsx (1)

298-302: Preventing Escape/outside-close can hurt accessibility; gate it behind “dirty” state.

Current handlers always block dismissal. Prefer blocking only when the form has unsaved edits and show a confirm.

- <DialogContent
-   className="rounded-none sm:max-w-[500px]"
-   onPointerDownOutside={(e) => e.preventDefault()}
-   onEscapeKeyDown={(e) => e.preventDefault()}
->
+ <DialogContent
+   className="rounded-none sm:max-w-[500px]"
+   onPointerDownOutside={(e) => {
+     if (form.formState?.isDirty) e.preventDefault()
+   }}
+   onEscapeKeyDown={(e) => {
+     if (form.formState?.isDirty) e.preventDefault()
+   }}
+>

Repeat the same pattern for the main Launch dialog.

Also applies to: 411-415

apps/web/components/submissions/early-submission-dialog.tsx (1)

33-37: Allow Escape/outside-close when the form is pristine; confirm when dirty.

Mirroring the Launch dialog advice, block dismissal only when there are unsaved edits to preserve expected keyboard behavior.

- <DialogContent
-   className="max-h-[90vh] max-w-2xl overflow-y-auto rounded-none"
-   onPointerDownOutside={(e) => e.preventDefault()}
-   onEscapeKeyDown={(e) => e.preventDefault()}
->
+ <DialogContent
+   className="max-h-[90vh] max-w-2xl overflow-y-auto rounded-none"
+   onPointerDownOutside={(e) => {
+     const dirty = document.querySelector('form')?.querySelectorAll('[data-dirty="true"]').length
+     if (dirty) e.preventDefault()
+   }}
+   onEscapeKeyDown={(e) => {
+     const dirty = document.querySelector('form')?.querySelectorAll('[data-dirty="true"]').length
+     if (dirty) e.preventDefault()
+   }}
+>
apps/web/next.config.mjs (1)

12-15: Prefer a ternary over boolean-and in object spread for clarity and safety.

Spreading a non-object is coerced but opaque. A ternary avoids surprises and documents intent. Also consider normalizing truthy env values.

-  ...(process.env.NEXT_OUTPUT_STANDALONE === 'true' && {
-    output: 'standalone',
-  }),
+  ...(process.env.NEXT_OUTPUT_STANDALONE === 'true'
+    ? { output: 'standalone' }
+    : {}),
+  // Or, if you want to accept 1/true/yes:
+  // ...( /^(1|true|yes)$/i.test(process.env.NEXT_OUTPUT_STANDALONE ?? '')
+  //   ? { output: 'standalone' }
+  //   : {}),
package.json (2)

36-36: Turbo bump looks fine; check remote cache compatibility.

2.5.x is backward-compatible in most cases. Ensure your CI/remote cache is on the same minor to avoid cache misses.


11-13: Document or provide fallback for Docker Compose v2 plugin
Some environments may still rely on the legacy docker-compose binary. Either add legacy scripts alongside the plugin-based commands, for example:

"docker-compose:up":   "docker-compose up -d",
"docker-compose:down": "docker-compose down"

or clearly document in your README that Docker Compose v2 (the docker compose plugin) is required.

.dockerignore (2)

51-55: Don't ignore Dockerfile/.dockerignore unless you're certain they're unused in COPY steps.

Some multi-stage builds COPY the Dockerfile or rely on it in later stages; excluding can surprise future changes.

-# Docker
-Dockerfile
-docker-compose.yml
-.dockerignore
+# Docker (keep these available to COPY if needed)
+# Dockerfile
+# docker-compose.yml
+# .dockerignore

24-30: Excluding README/docs reduces context size but can impede image self-documentation.

If your Dockerfile copies README/docs into the image (helpful for docker inspect labels or health endpoints), consider keeping them.

apps/web/hooks/use-comment-indentation.ts (2)

5-6: Clamp to a supported visual range and guard against negative depths.

When maxDepth > 6 or depth < 0, the current logic yields empty classes. Clamp visualDepth to the [0, 6] range that the class maps actually cover (capping at maxDepth as well); keep canReply based on the provided maxDepth.

-    const clampedDepth = Math.min(depth, maxDepth);
+    const VISUAL_MAX = 6;
+    const visualDepth = Math.max(0, Math.min(depth, VISUAL_MAX, maxDepth));
...
-      indentationClass: indentationClasses[clampedDepth as keyof typeof indentationClasses] || '',
-      borderClass: borderClasses[clampedDepth as keyof typeof borderClasses] || '',
+      indentationClass: indentationClasses[visualDepth as keyof typeof indentationClasses] || '',
+      borderClass: borderClasses[visualDepth as keyof typeof borderClasses] || '',

Also applies to: 28-33


8-16: Deduplicate class maps and hoist constants.

Both maps are static; define them once at module scope and compute border via a simple check.

-import { useMemo } from 'react';
+import { useMemo } from 'react';
+
+const INDENTATION_CLASSES = {
+  0: '', 1: 'ml-4', 2: 'ml-8', 3: 'ml-12', 4: 'ml-16', 5: 'ml-20', 6: 'ml-24',
+} as const;

 export function useCommentIndentation(depth: number, maxDepth: number = 6) {
   return useMemo(() => {
-    // Generate indentation classes based on depth
-    const indentationClasses = {
-      0: '',
-      1: 'ml-4',
-      2: 'ml-8',
-      3: 'ml-12',
-      4: 'ml-16',
-      5: 'ml-20',
-      6: 'ml-24',
-    };
-
-    const borderClasses = {
-      0: '',
-      1: 'border-l border-neutral-700 pl-4',
-      2: 'border-l border-neutral-700 pl-4',
-      3: 'border-l border-neutral-700 pl-4',
-      4: 'border-l border-neutral-700 pl-4',
-      5: 'border-l border-neutral-700 pl-4',
-      6: 'border-l border-neutral-700 pl-4',
-    };
+    const border = 'border-l border-neutral-700 pl-4';
...
-      indentationClass: indentationClasses[visualDepth as keyof typeof indentationClasses] || '',
-      borderClass: borderClasses[visualDepth as keyof typeof borderClasses] || '',
+      indentationClass: INDENTATION_CLASSES[visualDepth as keyof typeof INDENTATION_CLASSES] || '',
+      borderClass: visualDepth >= 1 ? border : '',

Also applies to: 18-26

packages/api/src/driver/types.ts (2)

116-128: Normalize naming to camelCase and make nullable fields explicit.

This interface mixes snake_case and camelCase and forces owner.avatar_url to exist. Align with the rest of the codebase and provider variability.

-export interface UnSubmittedRepo {
-  name: string;
-  repoUrl: string;
-  stars: number;
-  forks: number;
-  isOwner: boolean;
-  description?: string | null;
-  gitHost: 'github' | 'gitlab';
-  owner: {
-    avatar_url: string;
-  };
-  created_at: string;
-}
+export interface UnsubmittedRepo {
+  name: string;
+  repoUrl: string;
+  stars: number;
+  forks: number;
+  isOwner: boolean;
+  description?: string | null;
+  gitHost: 'github' | 'gitlab';
+  owner?: {
+    avatarUrl?: string | null;
+  } | null;
+  createdAt: string;
+}

Follow-up: update imports/usages to the new name and fields.


257-259: Avoid any for ctx; use a shared Context type.

Replace ctx: any and { db: any } with a proper Context/interface to prevent misuse and improve DX.
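
A minimal sketch of such a shared type, assuming the app uses Drizzle's node-postgres driver (the import path, driver type, and user shape here are assumptions):

import type { NodePgDatabase } from 'drizzle-orm/node-postgres';
import type * as schema from '../db/schema'; // illustrative path

export interface Context {
  db: NodePgDatabase<typeof schema>;
  user: { id: string } | null; // populated by the auth middleware
}

Procedures can then declare ctx: Context instead of ctx: any, so misuse like querying a nonexistent table becomes a compile error.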

packages/api/src/utils/content-moderation.ts (2)

11-24: Reduce false positives in normalizeText (use word boundaries).

Current regexes can match inside innocent words (e.g., “class”). Add word boundaries to avoid spurious normalization.

Apply:

-    { pattern: /s\*?h\*?i\*?t/gi, replacement: 'shit' },
+    { pattern: /\bs\*?h\*?i\*?t\b/gi, replacement: 'shit' },
-    { pattern: /f\*?u\*?c\*?k/gi, replacement: 'fuck' },
+    { pattern: /\bf\*?u\*?c\*?k\b/gi, replacement: 'fuck' },
-    { pattern: /f\*?u\*?k/gi, replacement: 'fuk' },
+    { pattern: /\bf\*?u\*?k\b/gi, replacement: 'fuk' },
-    { pattern: /b\*?i\*?t\*?c\*?h/gi, replacement: 'bitch' },
+    { pattern: /\bb\*?i\*?t\*?c\*?h\b/gi, replacement: 'bitch' },
-    { pattern: /h\*?e\*?l\*?l/gi, replacement: 'hell' },
+    { pattern: /\bh\*?e\*?l\*?l\b/gi, replacement: 'hell' },
-    { pattern: /a\*?s\*?s/gi, replacement: 'ass' },
+    { pattern: /\ba\*?s\*?s\b/gi, replacement: 'ass' },
-    { pattern: /p\*?i\*?s\*?s/gi, replacement: 'piss' },
+    { pattern: /\bp\*?i\*?s\*?s\b/gi, replacement: 'piss' },
-    { pattern: /c\*?r\*?a\*?p/gi, replacement: 'crap' },
+    { pattern: /\bc\*?r\*?a\*?p\b/gi, replacement: 'crap' },
-    { pattern: /s\*?t\*?u\*?p\*?i\*?d/gi, replacement: 'stupid' },
+    { pattern: /\bs\*?t\*?u\*?p\*?i\*?d\b/gi, replacement: 'stupid' },
-    { pattern: /i\*?d\*?i\*?o\*?t/gi, replacement: 'idiot' },
+    { pattern: /\bi\*?d\*?i\*?o\*?t\b/gi, replacement: 'idiot' },

2-9: Remove unused interfaces or switch to JSON endpoints.

PurgoMalumResponse and PurgoMalumContainsResponse are unused since you parse text. Remove them or change to JSON endpoints and use the types.

-interface PurgoMalumResponse {
-  result: string;
-}
-
-interface PurgoMalumContainsResponse {
-  containsProfanity: boolean;
-}
+// Removed unused interfaces. Using text endpoints.
apps/web/components/submissions/submission-form.tsx (2)

246-361: Guard against out-of-order repo validation responses.

Rapid edits can race multiple fetchQuery calls, causing stale validation state. Track the latest request and ignore older responses.

Add a ref and compare before setting state:

+  const lastRepoCheckId = useRef(0);
   const validateRepository = useCallback(
     async (repoUrl: string, gitHost: string) => {
+      const reqId = ++lastRepoCheckId.current;
       ...
-      try {
+      try {
         const result = await queryClient.fetchQuery( ... );
-        if (result) {
+        if (reqId !== lastRepoCheckId.current) return; // stale
+        if (result) {
           try {
             const duplicateCheck = await queryClient.fetchQuery( ... );
-            if (duplicateCheck.exists) {
+            if (reqId !== lastRepoCheckId.current) return; // stale
+            if (duplicateCheck.exists) {
               setRepoValidation({ ... });
               return;
             } else {
               setRepoValidation({ ... });
             }
             ...
           } catch (duplicateError) {
+            if (reqId !== lastRepoCheckId.current) return; // stale
             setRepoValidation({ ... });
             ...
           }
         }
       } catch (error) {
+        if (reqId !== lastRepoCheckId.current) return; // stale
         setRepoValidation({ ... });
       }

334-340: Polish the fallback message.

User-facing copy “awkward..” reads informal. Prefer a neutral tone.

-              message: 'Repository found! (could not verify if already submitted, awkward..)',
+              message: 'Repository found (duplicate check unavailable right now).',
apps/web/components/comments/reply-form.tsx (2)

108-110: Optional: render a sign-in nudge instead of returning null.

Returning null hides the reply box entirely for signed-out users. Consider a small inline “Log in to reply” CTA to improve discoverability.
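
A minimal sketch of the nudge, assuming the component already has session state from the auth hook (names and copy are illustrative):

// Instead of `if (!session) return null;`:
if (!session) {
  return (
    <p className="text-sm text-neutral-400">
      <a href="/login" className="underline hover:text-white">
        Log in
      </a>{' '}
      to join the conversation.
    </p>
  );
}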


163-174: Disable submit until the form is valid/dirty.

Prevents empty submissions and improves UX feedback before validation runs.

-            <Button
+            <Button
               type="submit"
               size="sm"
-              disabled={isReplyPending}
+              disabled={isReplyPending || !form.formState.isDirty || !form.formState.isValid}
               className="gap-2 rounded-none"
             >

Note: set useForm({ mode: 'onChange' }) if you want immediate validity.

Dockerfile (2)

40-41: Avoid Basic-Auth-shaped placeholders in DATABASE_URL.

These defaults trip secret scanners. Use a neutral placeholder that doesn’t include user:pass@.

-ARG DATABASE_URL=postgresql://placeholder:placeholder@placeholder:5432/placeholder
+ARG DATABASE_URL=postgresql://localhost:5432/db

116-117: Prefer executing the server directly instead of “bun run” for a file path.

Minor: bun run is primarily for npm scripts; calling the file directly is simpler.

-CMD ["bun", "run", "apps/web/server.js"]
+CMD ["bun", "apps/web/server.js"]
packages/db/drizzle/0022_green_rattler.sql (1)

1-12: Create the unique index in the initial migration to avoid the follow-up fix.

Since this table is new in this PR, prefer making the index unique here and drop 0023.

-CREATE INDEX "project_comment_like_unique_idx" ON "project_comment_like" USING btree ("comment_id","user_id");
+CREATE UNIQUE INDEX "project_comment_like_unique_idx" ON "project_comment_like" USING btree ("comment_id","user_id");

If you keep 0023 for history, no action needed.

packages/db/drizzle/0022_married_starbolt.sql (3)

1-3: Unused enum and missing extension guard

contrib_provider is introduced but not used in any table here. Also, gen_random_uuid() requires pgcrypto. Add the extension guard and either use or drop the enum in this migration.

Apply:

+-- required for gen_random_uuid()
+CREATE EXTENSION IF NOT EXISTS pgcrypto;
+
 CREATE TYPE "public"."contrib_period" AS ENUM('all_time', 'last_30d', 'last_365d');--> statement-breakpoint
 CREATE TYPE "public"."contrib_provider" AS ENUM('github', 'gitlab');--> statement-breakpoint
 CREATE TYPE "public"."notification_type" AS ENUM('launch_scheduled', 'launch_live', 'comment_received');--> statement-breakpoint

15-25: Notification schema OK; consider FK and update semantics

Looks good. Minor: if notifications must be user-scoped always, the FK is correct. Ensure updated_at is bumped on update (trigger) if you plan to edit rows.

I can add a simple SET updated_at = now() trigger if desired.


27-34: Indexing for unread-by-user fast path

Common queries are "fetch recent unread for user" and "count unread". A composite index serves both better than separate indexes.

Apply:

 CREATE INDEX "notification_user_id_idx" ON "notification" USING btree ("user_id");--> statement-breakpoint
-CREATE INDEX "notification_read_idx" ON "notification" USING btree ("read");--> statement-breakpoint
 CREATE INDEX "notification_type_idx" ON "notification" USING btree ("type");--> statement-breakpoint
 CREATE INDEX "notification_created_at_idx" ON "notification" USING btree ("created_at" DESC NULLS LAST);
+CREATE INDEX "notification_user_read_created_idx"
+  ON "notification" ("user_id","read","created_at" DESC);--> statement-breakpoint

Optional: drop notification_user_id_idx later if the composite covers all access.

apps/web/app/(public)/(projects)/projects/[id]/project-page.tsx (1)

36-36: Import RepoContent as a type to avoid runtime import; keep style consistent

Avoids bundling a non-value export and aligns with TS best practices.

-import {isValidProvider, RepoContent} from '@/lib/constants'
+import { isValidProvider } from '@/lib/constants';
+import type { RepoContent } from '@/lib/constants';
apps/web/components/layout/public-nav.tsx (1)

46-46: Placement LGTM; matches mobile header order

Keep it next to ScheduledLaunches for consistency. Consider dynamic import if bundle size becomes a concern.
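
If it does, a standard next/dynamic wrapper keeps the dropdown out of the initial bundle; a sketch (the import path and export name are assumptions):

import dynamic from 'next/dynamic';

// Client-only widget; skip SSR since it depends on session state.
const NotificationsDropdown = dynamic(
  () => import('./notifications-dropdown').then((m) => m.NotificationsDropdown),
  { ssr: false },
);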

packages/db/drizzle/0025_add_repo_id_index.sql (1)

1-2: Confirm existing index in 0024 and sync Drizzle schema

  • Migration 0024_free_khan.sql already creates "project_repo_id_idx" on project(repo_id); IF NOT EXISTS makes 0025_add_repo_id_index.sql a no-op—consider removing this redundant migration.
  • Add index('project_repo_id_idx').on(table.repoId) to packages/db/src/schema/projects.ts so the Drizzle schema matches the database.
apps/web/forms/index.ts (1)

14-24: Omitting repoId from client input is correct; optionally harden schema.

  • Good call to exclude repoId so it’s server-derived and not client-injected.
  • Optional: enforce unknown-key rejection to prevent stray fields (including repoId) from slipping through if defaults change.

Apply strictness to both forms:

 export const earlySubmissionForm = createInsertSchema(project)
   .omit({
     id: true,
     ownerId: true,
     statusId: true,
     typeId: true,
     createdAt: true,
     updatedAt: true,
     deletedAt: true,
     repoId: true,
   })
+  .strict()
   .extend({
@@
 export const submisionForm = createInsertSchema(project)
   .omit({
     id: true,
     ownerId: true,
     statusId: true,
     typeId: true,
     createdAt: true,
     updatedAt: true,
     deletedAt: true,
     repoId: true,
   })
+  .strict()
   .extend({

Also applies to: 47-57

apps/web/lib/constants.ts (1)

19-23: Type-narrow to local ProviderId to avoid DB type coupling

Use ProviderId as the guard’s predicate type; keeps the contract stable without schema imports.

-export const isValidProvider = (
-  provider: string | null | undefined,
-): provider is (typeof projectProviderEnum.enumValues)[number] => {
+export const isValidProvider = (
+  provider: string | null | undefined,
+): provider is ProviderId => {
   return provider === 'github' || provider === 'gitlab';
 };
apps/web/app/(public)/launches/launch-card.tsx (1)

16-16: Good move to centralize provider guard

Importing isValidProvider reduces duplication.

Since isValidProvider narrows to ProviderId, you can drop the explicit cast at Line 37 and let the guard drive the type.

packages/db/drizzle/0024_free_khan.sql (1)

16-17: Revisit index characteristics and build mode

If repo_id is logically unique per project, prefer a UNIQUE index. Also consider CONCURRENTLY for large tables to reduce write locks (only if your migration runner executes outside a transaction).

-CREATE INDEX IF NOT EXISTS "project_repo_id_idx" ON "project" ("repo_id");
+-- Use UNIQUE if appropriate for your domain
+CREATE UNIQUE INDEX IF NOT EXISTS "project_repo_id_idx" ON "project" ("repo_id");
+-- Or, if non-unique is intended but the table is large and your runner allows it:
+-- CREATE INDEX CONCURRENTLY IF NOT EXISTS "project_repo_id_idx" ON "project" ("repo_id");
apps/web/hooks/use-notifications.ts (1)

69-79: Leverage React Query’s focus refetch instead of manual visibility listener

React Query already supports refetchOnWindowFocus. Unless you need custom logic, you can drop this effect and enable the built-in behavior on each query.

-// Visibility handling
-useEffect(() => {
-  const handleVisibilityChange = () => {
-    if (document.visibilityState === 'visible') {
-      refetch();
-    }
-  };
-
-  document.addEventListener('visibilitychange', handleVisibilityChange);
-  return () => document.removeEventListener('visibilitychange', handleVisibilityChange);
-}, [refetch]);
+// Consider relying on React Query defaults:
+// set `refetchOnWindowFocus: true` in queryOptions if needed and remove this effect.
apps/web/app/(public)/launches/[id]/components/launch-comments.tsx (1)

45-49: Optional: autofocus the composer for empty threads

Minor UX: pass autoFocus when there are zero comments to encourage first posts.

-        <ReplyForm
+        <ReplyForm
           projectId={projectId}
           placeholder="Add your comment..."
           className="w-full"
+          autoFocus={!(comments?.length)}
         />
apps/web/app/(admin)/admin/page.tsx (3)

24-26: Avoid brittle query keys when invalidating.

Use the queryOptions-derived key to stay in sync with TRPC’s key generator.

Apply this diff:

 const trpc = useTRPC();
- const queryClient = useQueryClient();
+ const queryClient = useQueryClient();
+ const dashboardQueryKey = trpc.admin.dashboard.queryOptions().queryKey;
@@
-        queryClient.invalidateQueries({ queryKey: ['admin.dashboard'] });
+        queryClient.invalidateQueries({ queryKey: dashboardQueryKey });

Also applies to: 33-34


27-41: Defensive toast message to handle shape variations.

Guard against missing totals to avoid runtime stringifying of undefined.

Apply this diff:

-      onSuccess: (data) => {
-        toast.success('Repository IDs Updated', {
-          description: `Updated: ${data.totals.updated}, Failed: ${data.totals.failed}, Skipped: ${data.totals.skipped}`,
-        });
+      onSuccess: (data) => {
+        const totals = (data as any)?.totals ?? (data as any) ?? {};
+        toast.success('Repository IDs Updated', {
+          description: `Updated: ${totals.updated ?? 0}, Failed: ${totals.failed ?? 0}, Skipped: ${totals.skipped ?? 0}`,
+        });

147-155: Improve accessibility for loading state.

Expose busy state to assistive tech.

Apply this diff:

 <Button
   className="w-full justify-start"
   variant="outline"
   onClick={() => updateRepoIds.mutate({ gitHost: 'all' })}
   disabled={updateRepoIds.isPending}
+  aria-busy={updateRepoIds.isPending}
 >
packages/api/src/routers/notifications.ts (2)

15-22: Simplify Zod defaults.

.default({}) already handles undefined; .optional() is redundant and widens the type.

Apply this diff:

-  getAll: protectedProcedure
-    .input(
-      z.object({
+  getAll: protectedProcedure
+    .input(
+      z.object({
         limit: z.number().min(1).max(100).default(50),
         offset: z.number().min(0).default(0),
         unreadOnly: z.boolean().default(false),
-      }).optional().default({})
+      }).default({})
     )

54-57: Return a lightweight result for bulk update.

Returning all updated rows can be heavy; return a count instead.

Apply this diff:

-  markAllAsRead: protectedProcedure
-    .mutation(async ({ ctx }) => {
-      return await markAllNotificationsAsRead(ctx.user.id);
-    }),
+  markAllAsRead: protectedProcedure
+    .mutation(async ({ ctx }) => {
+      const res = await markAllNotificationsAsRead(ctx.user.id);
+      return { updatedCount: res.length };
+    }),
apps/web/components/user/profile.tsx (2)

40-52: Gate the query with validated provider.

Avoid calling the endpoint when provider is invalid/missing.

Apply this diff:

   const { data: unSubmittedProjects } = useQuery(
     trpc.projects.getUnSubmitted.queryOptions(
       {
-        provider: profile?.git.provider as 'github' | 'gitlab',
+        provider: profile?.git.provider as 'github' | 'gitlab',
         username: profile?.username ?? '',
         userId: profile?.id ?? '',
       },
       {
-        enabled: !!profile?.git.provider && !!profile?.username && !!profile?.id,
+        enabled:
+          isValidProvider(profile?.git?.provider) &&
+          !!profile?.username &&
+          !!profile?.id,
       },
     ),
   );

76-87: Run README fetch only for supported providers.

If GitLab READMEs aren’t supported by repository.getReadme, restrict to GitHub to avoid spurious errors.

Apply this diff if only GitHub is supported:

   const { data: profileReadme, isPending: isReadmeLoading } = useQuery(
     trpc.repository.getReadme.queryOptions(
       {
         url: `${profile?.username}/${profile?.username}`,
         provider: profile?.git?.provider as (typeof projectProviderEnum.enumValues)[number],
       },
       {
-        enabled: !!profile?.username && isValidProvider(profile?.git?.provider),
+        enabled: !!profile?.username && profile?.git?.provider === 'github',
         retry: false,
       },
     ),
   );
packages/api/src/routers/early-submissions.ts (3)

116-117: Remove non-null assertion after adding guard; keep the type safe.

-          repoId: repoId!,
+          repoId,

156-167: Count query doesn’t need error handling; return zero safely.

count() always returns one row. You can simplify and avoid throwing.

-    const earlySubmissionsCount = await ctx.db.select({ count: count() }).from(project);
-    if (!earlySubmissionsCount[0]) {
-      throw new TRPCError({
-        code: 'INTERNAL_SERVER_ERROR',
-        message: 'Failed to get waitlist count',
-      });
-    }
-    return {
-      count: earlySubmissionsCount[0].count,
-    };
+    const [{ count: total }] = await ctx.db.select({ count: count() }).from(project);
+    return { count: total ?? 0 };

59-66: Normalize repository slug casing to avoid accidental duplicates.

GitHub/GitLab slugs are case-insensitive. Normalize input.gitRepoUrl to lower-case (and trim “.git”) before duplicate checks and writes.

-    // Validate repository and get privacy status
+    // Validate repository and get privacy status
+    input.gitRepoUrl = input.gitRepoUrl.trim().replace(/\.git$/i, '').toLowerCase();

Also applies to: 112-115

apps/web/components/comments/comment-actions.tsx (2)

56-66: Add aria-pressed and explicit type to Like button for accessibility and form-safety.

Improves a11y for toggle semantics and prevents accidental form submission if used inside a form.

-      <Button
+      <Button
         variant="ghost"
         size="sm"
         onClick={handleLikeClick}
         disabled={isLiking}
+        type="button"
         className={`h-auto p-0 border-0 outline-none focus:outline-none bg-transparent hover:bg-transparent active:bg-transparent shadow-none focus:ring-0 transition-colors ${isLiked
           ? 'text-red-500 hover:text-red-400'
           : 'text-neutral-400 hover:text-white'
           } ${isLiking ? 'opacity-50 cursor-not-allowed' : ''}`}
         aria-label={`${isLiked ? 'Unlike' : 'Like'} this comment`}
+        aria-pressed={isLiked}
       >

86-106: Optional: wire aria-controls for collapse to the replies container id.

If the caller can pass a repliesContainerId, hook it here to improve screen-reader context.

-        <Button
+        <Button
           variant="ghost"
           size="sm"
           onClick={onToggleCollapse}
           className="h-auto p-0 text-neutral-400 hover:text-white transition-colors focus:outline-none focus:ring-0"
           aria-label={`${isCollapsed ? 'Expand' : 'Collapse'} ${totalReplyCount} replies`}
           aria-expanded={!isCollapsed}
+          // aria-controls={repliesContainerId}
         >
packages/db/src/schema/notifications.ts (2)

21-27: Prefer a safer JSONB payload type.

Avoid any in data; use Record<string, unknown> to keep type safety while staying flexible.

-    data: jsonb('data').$type<{
+    data: jsonb('data').$type<{
       projectId?: string;
       commentId?: string;
       launchId?: string;
-      [key: string]: any;
+      [key: string]: unknown;
     }>(),

34-40: Add a composite index for common unread-per-user queries.

You already index userId, read, createdAt separately. A composite (user_id, read, created_at DESC) will speed “fetch unread for user ordered by time”.

   (t) => [
     index('notification_user_id_idx').on(t.userId),
     index('notification_read_idx').on(t.read),
     index('notification_type_idx').on(t.type),
     index('notification_created_at_idx').on(t.createdAt.desc()),
+    index('notification_user_read_created_at_idx').on(t.userId, t.read, t.createdAt.desc()),
   ],
apps/web/components/admin/project-edit-form.tsx (2)

15-21: repoId required in schema aligns with server, but ensure it’s always set before submit.

You populate repoId on successful validation. Consider marking it as non-empty at submit time to avoid silent empty-string cases.

-  repoId: z.string(),
+  repoId: z.string().min(1, 'Repository ID is required after validation'),

168-176: Slug validation message is good; normalize and lowercase input before validate.

Minor UX: trim, strip trailing “.git”, and lowercase before formatRegex test to match backend expectations.

-      const formatRegex = /^[a-zA-Z0-9_-]+\/[a-zA-Z0-9._-]+$/;
+      repoUrl = repoUrl.trim().replace(/\.git$/i, '').toLowerCase();
+      const formatRegex = /^[a-z0-9_-]+\/[a-z0-9._-]+$/;
packages/api/src/driver/github.ts (2)

1044-1050: Filtering only by ownerId misses repos already submitted by others.

If the intent is “repos not yet in our catalog,” you should exclude any repo whose gitRepoUrl exists, regardless of ownerId. If intent is “not submitted by this user,” current logic is correct. Please confirm and adjust.

-            const submittedRepos = await ctx.db.query.project.findMany({
-              where: (project, { eq }) => eq(project.ownerId, userId),
-            });
+            // Option A: exclude repos already in our catalog (regardless of owner)
+            const submittedRepos = await ctx.db.query.project.findMany({
+              columns: { gitRepoUrl: true },
+            });
             const notSubmitted = allRepos.filter((repo) => {
               return !submittedRepos.some((submitted) => submitted.gitRepoUrl === repo.full_name);
             });

31-36: Type ctx.db and consider throttling plugin.

Minor: use your DB type instead of any; consider @octokit/plugin-throttling instead of manual sleep.
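
For reference, a minimal throttling setup could look like this sketch (retry policy values are illustrative, not a recommendation for this codebase):

import { Octokit } from '@octokit/rest';
import { throttling } from '@octokit/plugin-throttling';

const ThrottledOctokit = Octokit.plugin(throttling);

const octokit = new ThrottledOctokit({
  auth: process.env.GITHUB_TOKEN,
  throttle: {
    // Return true to retry after `retryAfter` seconds when the rate limit is hit.
    onRateLimit: (_retryAfter, _options, _octokit, retryCount) => retryCount < 2,
    // Secondary (abuse) limits: do not retry.
    onSecondaryRateLimit: () => false,
  },
});

The plugin queues requests and honors Retry-After headers, which removes the need for hand-rolled sleeps.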

apps/web/components/user/contribution-graph.tsx (1)

103-109: Memoize grid/month computations to avoid recomputation on each render.

These are pure functions dependent on contributions/grid. Memoization will trim render cost for 52+ columns.

+ import { useMemo } from 'react';
...
- const grid = getContributionGrid();
+ const grid = useMemo(getContributionGrid, [contributions]);
  const weeks = grid.length > 0 && grid[0] ? grid[0].length : 0;
- const getMonthColumns = () => {
+ const getMonthColumns = () => {
    ...
  };
+ const monthCols = useMemo(getMonthColumns, [grid, weeks]);
...
- {getMonthColumns().map((col) => (
+ {monthCols.map((col) => (

Also applies to: 230-241

packages/api/src/driver/gitlab.ts (3)

55-59: Simplify the skip condition.

Comparing repoId to gitRepoUrl is irrelevant and always true when repoId is numeric. Keep the intent: skip when repoId is already a numeric ID.

- if (p.repoId && !isNaN(Number(p.repoId)) && p.repoId !== p.gitRepoUrl) {
+ if (p.repoId && !Number.isNaN(Number(p.repoId))) {
    result.skipped++;
    continue;
  }

1019-1021: Filter the account by provider to avoid cross-provider collisions.

A user with multiple accounts could share the same accountId across providers. Constrain to GitLab.

- const userAccount = await ctx.db.query.account.findFirst({
-   where: (account, { eq }) => eq(account.accountId, currentUser.id),
- });
+ const userAccount = await ctx.db.query.account.findFirst({
+   where: (account, { and, eq }) =>
+     and(eq(account.accountId, currentUser.id), eq(account.providerId, 'gitlab')),
+ });

1015-1019: Paginate GitLab projects when listing membership.

perPage: 100 caps results; users with >100 projects will see incomplete data.

Consider iterating pages until exhaustion (or a reasonable cap), similar to your MR/issue pagination approach.
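
A sketch of the exhaustion loop against the REST API (GitLab's /projects endpoint with membership=true; the page cap and plain fetch usage are illustrative):

async function listAllMemberProjects(token: string, maxPages = 20): Promise<unknown[]> {
  const all: unknown[] = [];
  for (let page = 1; page <= maxPages; page++) {
    const res = await fetch(
      `https://gitlab.com/api/v4/projects?membership=true&per_page=100&page=${page}`,
      { headers: { 'PRIVATE-TOKEN': token } },
    );
    if (!res.ok) throw new Error(`GitLab projects request failed: ${res.status}`);
    const batch = (await res.json()) as unknown[];
    all.push(...batch);
    if (batch.length < 100) break; // short page means no more results
  }
  return all;
}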

apps/web/app/(public)/launches/[id]/components/launch-sidebar.tsx (2)

69-72: Use a real repo URL validator instead of a truthy check.

The current Boolean check will enable queries for malformed URLs. Prefer a shared validator and avoid name shadowing.

- import {isValidProvider} from '@/lib/constants'
+ import { isValidProvider, isValidRepoUrl } from '@/lib/constants'
- // Validate gitRepoUrl and gitHost before using them in queries
- const isValidRepoUrl = Boolean(project?.gitRepoUrl && project.gitRepoUrl.trim() !== '');
- const isValidGitHost = isValidProvider(project?.gitHost);
+ // Validate gitRepoUrl and gitHost before using them in queries
+ const repoUrlOk = isValidRepoUrl(project?.gitRepoUrl);
+ const isValidGitHost = isValidProvider(project?.gitHost);
-        enabled: isValidRepoUrl && isValidGitHost,
+        enabled: repoUrlOk && isValidGitHost,
-        enabled: !!repoQuery.data && isValidRepoUrl && isValidGitHost,
+        enabled: !!repoQuery.data && repoUrlOk && isValidGitHost,

Also applies to: 37-37


152-179: Broaden cache invalidation after edits to keep lists fresh.

Only getLaunchByProjectId is invalidated. Include list queries users commonly view so edits reflect immediately.

   onSuccess: () => {
     toast.success('Launch description updated');
     queryClient.invalidateQueries({
       queryKey: trpc.launches.getLaunchByProjectId.queryKey({ projectId }),
     });
+    queryClient.invalidateQueries({ queryKey: trpc.launches.getTodayLaunches.queryKey(), exact: false });
+    queryClient.invalidateQueries({ queryKey: trpc.launches.getYesterdayLaunches.queryKey(), exact: false });
+    queryClient.invalidateQueries({ queryKey: trpc.launches.getAllLaunches.queryKey(), exact: false });
     setEditOpen(false);
   },

Apply the same additions to the scheduled edit mutation’s onSuccess.

packages/db/drizzle/0022_premium_zaladane.sql (1)

3-11: updated_at won’t auto-bump without a trigger.

You set DEFAULT now() but no ON UPDATE mechanism. Either rely on application code consistently (as services currently do) or add a small trigger to auto-update updated_at on row changes.

Can provide a trigger migration if you want to automate this.

packages/api/src/routers/submissions.ts (2)

219-219: Await cache invalidation and move it outside the transaction.

Running this fire-and-forget inside the transaction may race with commit and swallow errors.

-      invalidateCache(`${input.gitHost as 'github' | 'gitlab'}:unsubmitted:${ownerCheck.owner}`);
+      // moved below transaction

Add after the transaction (outside the callback):

await invalidateCache(`${input.gitHost as 'github' | 'gitlab'}:unsubmitted:${ownerCheck.owner}`);

60-75: Consider rate-limiting checkDuplicateRepo.

It calls provider APIs when gitHost is present. Add a lightweight limiter to prevent abuse.
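
One lightweight option is an in-memory sliding window keyed by user id; a sketch (single-instance only, so a shared store such as Redis would be needed across instances, and the limits are illustrative):

const recentCalls = new Map<string, number[]>();

/** Allow up to `limit` calls per `windowMs` for a given key. */
function allow(key: string, limit = 10, windowMs = 60_000): boolean {
  const now = Date.now();
  const fresh = (recentCalls.get(key) ?? []).filter((t) => now - t < windowMs);
  if (fresh.length >= limit) return false;
  fresh.push(now);
  recentCalls.set(key, fresh);
  return true;
}

// In checkDuplicateRepo, before calling the provider API:
// if (!allow(ctx.user.id)) throw new TRPCError({ code: 'TOO_MANY_REQUESTS' });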

packages/db/src/services/notifications.ts (2)

52-68: Query pattern is fine; consider a composite index if unread filtering is hot.

You often query by userId and read=false. A composite index on (user_id, read, created_at DESC) will serve both list and count efficiently.


89-100: Avoid returning full rows for markAll as read (payload bloat).

If the caller doesn’t need all fields, return only ids or a count.

-  const updatedNotifications = await db
+  const updated = await db
     .update(notification)
     .set({
       read: true,
       updatedAt: new Date(),
     })
     .where(eq(notification.userId, userId))
-    .returning();
+    .returning({ id: notification.id });
 
-  return updatedNotifications;
+  return updated;
apps/web/components/comments/comment-thread.tsx (4)

92-104: Remove debug logs and rely on mutation callbacks

Console logging in onSuccess/onError is noisy in production. The mutation already handles cache invalidation, and the UI should surface errors via toasts (assuming sonner's toast is imported, as elsewhere in the app). Drop the logs.

-  onSuccess: (data) => {
-    console.log('Like toggle successful:', data);
+  onSuccess: () => {
     queryClient.invalidateQueries({
       queryKey: trpc.launches.getComments.queryKey({ projectId }),
     });
   },
-  onError: (error) => {
-    console.error('Like toggle failed:', error);
-  },
+  onError: () => {
+    toast.error('Failed to update like');
+  },

135-143: try/catch around mutate is ineffective; remove and surface errors via onError

mutate is async and won’t throw synchronously. The try/catch never fires. Call toggleLike directly.

-  const handleLikeClick = useCallback(async () => {
-    console.log('Like button clicked for comment:', comment.id);
-    try {
-      toggleLike({ commentId: comment.id });
-      console.log('Toggle like mutation called');
-    } catch (error) {
-      console.error('Failed to toggle like:', error);
-    }
-  }, [comment.id, toggleLike]);
+  const handleLikeClick = useCallback(() => {
+    toggleLike({ commentId: comment.id });
+  }, [comment.id, toggleLike]);

15-29: Loosen createdAt typing to match serialized data

If tRPC isn’t using superjson, createdAt will be a string. Typing as string | Date avoids runtime/TS mismatches.

 export interface CommentWithReplies {
   id: string;
   content: string;
   parentId: string | null;
-  createdAt: Date;
+  createdAt: Date | string;

No code changes needed where you already wrap with new Date(...).

Also applies to: 171-173


46-51: Remove unused prop (hasReplies) from CommentContainer

The prop is passed but not used. Trim for clarity.

-interface CommentContainerProps {
-  depth: number;
-  isLast: boolean;
-  hasReplies: boolean;
-  children: React.ReactNode;
-}
+interface CommentContainerProps {
+  depth: number;
+  isLast: boolean;
+  children: React.ReactNode;
+}
-  <CommentContainer
+  <CommentContainer
     key={reply.id}
     depth={depth + 1}
     isLast={index === comment.replies!.length - 1}
-    hasReplies={Boolean(reply.replies && reply.replies.length > 0)}
   >

Also applies to: 231-236

apps/web/components/user/profile-tabs.tsx (2)

145-149: Normalize README encoding values

API may return variants like utf-8/UTF8. Normalize before passing to MarkdownContent to avoid wrong fallback to base64.

-  <MarkdownContent
-    content={profileReadme.content}
-    encoding={
-      profileReadme.encoding === 'base64' || profileReadme.encoding === 'utf8'
-        ? profileReadme.encoding
-        : 'base64'
-    }
-  />
+  <MarkdownContent
+    content={profileReadme.content}
+    encoding={(() => {
+      const enc = profileReadme.encoding.toLowerCase().replace(/[-_]/g, '');
+      return enc === 'base64' || enc === 'utf8' ? (enc as 'base64' | 'utf8') : 'base64';
+    })()}
+  />

268-294: Minor layout nit: avoid mixing gap-* with space-y-* on same grid

Using both can double vertical spacing. Keep gap-* only.

-<div className="grid grid-cols-1 gap-4 space-y-4">
+<div className="grid grid-cols-1 gap-4">
packages/api/src/routers/projects.ts (2)

64-71: Type ctx and simplify error handling in fetchRepoMetric

Avoid any and nested try/catch. This tightens types and reduces noise.

-async function fetchRepoMetric(
+async function fetchRepoMetric(
   gitRepoUrl: string | null,
   gitHost: string | null,
   metric: 'stars' | 'forks',
-  ctx: any,
+  ctx: Context,
 ): Promise<number | null> {
   if (!gitRepoUrl || !gitHost) {
     return null;
   }
-  try {
+  try {
     let identifier: string;
@@
-    try {
-      const driver = await getActiveDriver(gitHost as 'github' | 'gitlab', ctx);
-      const repo = await driver.getRepo(identifier);
+    const driver = await getActiveDriver(gitHost as 'github' | 'gitlab', ctx);
+    const repo = await driver.getRepo(identifier);
@@
-    } catch (repoError) {
-      console.error(`Failed to fetch ${metric} for ${identifier}:`, repoError);
-      return null;
-    }
-
-    return null;
+    return null;
   } catch (e) {
     console.error(`Failed to fetch ${metric} for ${gitRepoUrl}:`, e);
     return null;
   }
 }

Also applies to: 84-101


1013-1016: Drop stray console log in server code

Avoid leaking internals to logs; use a logger with levels if needed.

-              console.log(`Fetched repo_id ${repoId} for ${input.gitRepoUrl}`);
packages/api/src/routers/launches.ts (3)

27-65: Idempotent status flip looks good; add minimal guard against duplicate notifications

Concurrent callers could still attempt notifications for the same launch if the UPDATE races (rare). Consider a unique constraint or a transactional lock when selecting updatedLaunches, or catch duplicate-notification errors explicitly.
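
If you add the constraint, the insert becomes idempotent with a conflict no-op; a sketch assuming a unique index covers the duplicate key (since launchId lives in the jsonb data column, that would need an expression index or a dedicated column):

await ctx.db
  .insert(notification)
  .values({ userId, type: 'launch_live', data: { launchId } })
  .onConflictDoNothing(); // a concurrent duplicate becomes a no-op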


918-926: Moderation: return filtered content hint to caller (optional)

You currently reject on profanity. Alternatively, you could return the filtered content so the client can offer “use filtered” UX. Not required, just an option.


1039-1075: Like toggle is sound; minor optimization

After toggling, you count likes with a full count(*). Consider returning likeCount = inserted.length > 0 ? prev + 1 : Math.max(prev - 1, 0) via CTE to save one query, if this becomes hot.

Comment on lines 18 to 21
# Version control
.git
.gitignore
.gitattributes

💡 Verification agent

🧩 Analysis chain

Ignoring .git can break Sentry release auto-detection and similar tooling.

If you rely on git metadata during Docker builds (e.g., Sentry releases), either pass an explicit release (SENTRY_RELEASE/COMMIT_SHA) or allow .git in the build context.

Quick check for reliance:


🏁 Script executed:

#!/bin/bash
rg -n "SENTRY_RELEASE|COMMIT_SHA|propose-version|releases" -C2

Length of output: 2947


🏁 Script executed:

#!/bin/bash
set -e
rg -n -i sentry -C2

Length of output: 27021


Preserve .git or explicitly set Sentry release
Ignoring the .git directory in .dockerignore (lines 18–21) prevents the Sentry Webpack plugin from auto-detecting the current commit as the release. Either remove .git from the build context or add a build-arg for SENTRY_RELEASE (or COMMIT_SHA) to ensure accurate release tracking.

🤖 Prompt for AI Agents
In .dockerignore around lines 18–21, ignoring the .git directory prevents
Sentry’s Webpack plugin from auto-detecting the current commit for release
tracking; either remove the `.git` entry from .dockerignore so the repo metadata
is available at build time, or (preferably) keep `.git` ignored and add an
explicit build-time release variable: add a Dockerfile ARG like SENTRY_RELEASE
(or COMMIT_SHA) and update your CI/build pipeline to pass --build-arg
SENTRY_RELEASE=<commit_sha> (or set COMMIT_SHA) so the Sentry plugin receives
the correct release value.

data-slot="textarea"
className={cn(
-        'border-input placeholder:text-muted-foreground focus-visible:border-ring focus-visible:ring-ring/50 aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 aria-invalid:border-destructive dark:bg-input/30 flex field-sizing-content min-h-16 w-full rounded-md border bg-transparent px-3 py-2 text-base shadow-xs transition-[color,box-shadow] outline-none focus-visible:ring-[3px] disabled:cursor-not-allowed disabled:opacity-50 md:text-sm',
+        'border-input placeholder:text-muted-foreground flex min-h-16 w-full rounded-md bg-transparent px-3 py-2 text-base shadow-xs transition-[color,box-shadow] outline-none disabled:cursor-not-allowed disabled:opacity-50 md:text-sm',

🛠️ Refactor suggestion

Restore a visible focus indicator for accessibility.

The base Textarea sets outline-none and removes any visible focus cue, which fails WCAG 2.4.7 for keyboard users. Either keep the default outline or add a custom focus style.

Option A — keep browser outline:

-        'border-input placeholder:text-muted-foreground flex min-h-16 w-full rounded-md bg-transparent px-3 py-2 text-base shadow-xs transition-[color,box-shadow] outline-none disabled:cursor-not-allowed disabled:opacity-50 md:text-sm',
+        'border-input placeholder:text-muted-foreground flex min-h-16 w-full rounded-md bg-transparent px-3 py-2 text-base shadow-xs transition-[color,box-shadow] disabled:cursor-not-allowed disabled:opacity-50 md:text-sm',

Suggested change
-        'border-input placeholder:text-muted-foreground flex min-h-16 w-full rounded-md bg-transparent px-3 py-2 text-base shadow-xs transition-[color,box-shadow] outline-none disabled:cursor-not-allowed disabled:opacity-50 md:text-sm',
+        'border-input placeholder:text-muted-foreground flex min-h-16 w-full rounded-md bg-transparent px-3 py-2 text-base shadow-xs transition-[color,box-shadow] disabled:cursor-not-allowed disabled:opacity-50 md:text-sm',
🤖 Prompt for AI Agents
In apps/web/components/project/markdown-textarea.tsx around line 22, the
textarea class list includes "outline-none" which removes the browser focus
indicator and breaks keyboard accessibility; remove "outline-none" (or replace
it with a visible custom focus utility such as "focus:outline focus:outline-2
focus:outline-primary" or "focus:ring focus:ring-primary/50") so keyboard users
get a clear focus cue and WCAG 2.4.7 is satisfied.


return (
-    <div className={cn('rounded-md border', className)}>
+    <div className={cn('overflow-hidden rounded-md border', className)}>

🛠️ Refactor suggestion

Add focus-within ring on the wrapper to provide a consistent focus style.

This keeps a clear focus cue even when the textarea has no border and the wrapper clips overflow.

-    <div className={cn('overflow-hidden rounded-md border', className)}>
+    <div
+      className={cn(
+        'overflow-hidden rounded-md border focus-within:ring-2 focus-within:ring-ring focus-within:ring-offset-2 ring-offset-background',
+        className,
+      )}
+    >

Suggested change
-    <div className={cn('overflow-hidden rounded-md border', className)}>
+    <div
+      className={cn(
+        'overflow-hidden rounded-md border focus-within:ring-2 focus-within:ring-ring focus-within:ring-offset-2 ring-offset-background',
+        className,
+      )}
+    >
🤖 Prompt for AI Agents
In apps/web/components/project/markdown-textarea.tsx around line 51, the wrapper
div lacks a focus-within ring so focus can be lost when the textarea itself has
no border and the wrapper clips overflow; update the wrapper's className to add
Tailwind focus-within ring classes (e.g. focus-within:ring-2 and a project
primary ring color like focus-within:ring-primary-500, plus an optional
focus-within:ring-offset-0 or ring-offset-background token) so the wrapper shows
a consistent focus ring when the textarea (or any child) receives focus.

Comment on lines 5 to 15
UPDATE "project"
SET "repo_id" =
CASE
WHEN "git_host" IS NOT NULL THEN "git_host" || ':' || "git_repo_url"
ELSE 'unknown:' || "git_repo_url"
END
WHERE "repo_id" IS NULL;

-- After all values are populated, add the NOT NULL constraint
ALTER TABLE "project" ALTER COLUMN "repo_id" SET NOT NULL;


⚠️ Potential issue

Migration can fail when git_repo_url is NULL

'unknown:' || git_repo_url yields NULL if git_repo_url is NULL, violating the subsequent NOT NULL constraint.

Apply COALESCE to ensure non-null values and consider normalizing case:

 UPDATE "project"
 SET "repo_id" =
   CASE
-    WHEN "git_host" IS NOT NULL THEN "git_host" || ':' || "git_repo_url"
-    ELSE 'unknown:' || "git_repo_url"
+    WHEN "git_host" IS NOT NULL THEN lower("git_host") || ':' || coalesce("git_repo_url", '')
+    ELSE 'unknown:' || coalesce("git_repo_url", '')
   END
 WHERE "repo_id" IS NULL;

Optionally, guard against empty strings before enforcing NOT NULL (or set a default sentinel like 'unknown:').


Suggested change
UPDATE "project"
SET "repo_id" =
CASE
WHEN "git_host" IS NOT NULL THEN "git_host" || ':' || "git_repo_url"
ELSE 'unknown:' || "git_repo_url"
END
WHERE "repo_id" IS NULL;
-- After all values are populated, add the NOT NULL constraint
ALTER TABLE "project" ALTER COLUMN "repo_id" SET NOT NULL;
UPDATE "project"
SET "repo_id" =
CASE
WHEN "git_host" IS NOT NULL THEN lower("git_host") || ':' || coalesce("git_repo_url", '')
ELSE 'unknown:' || coalesce("git_repo_url", '')
END
WHERE "repo_id" IS NULL;
-- After all values are populated, add the NOT NULL constraint
ALTER TABLE "project" ALTER COLUMN "repo_id" SET NOT NULL;
🤖 Prompt for AI Agents
In packages/db/drizzle/0024_free_khan.sql around lines 5 to 15, the UPDATE uses
'unknown:' || "git_repo_url" which yields NULL when git_repo_url is NULL and
will fail the later NOT NULL constraint; change the expression to
COALESCE("git_repo_url", '') and guard against empty strings so the result is
never NULL (e.g., use CASE WHEN COALESCE(NULLIF(TRIM("git_repo_url"), ''), '') =
'' THEN 'unknown' ELSE LOWER(TRIM(COALESCE("git_repo_url", ''))) END and
concatenate with the git_host similarly using COALESCE/LOWER), then run the
ALTER TABLE to set NOT NULL. Ensure you normalize casing (LOWER/TRIM) and/or set
a sentinel like 'unknown:' when repo part is empty.

Comment on lines 6 to 7
"tables": {
"public.waitlist": {

💡 Verification agent

🧩 Analysis chain

Where are contrib_daily and contrib_totals in this snapshot?

PR objectives say new tables contrib_daily and contrib_totals are added, but this snapshot doesn’t include them. Confirm they land in a subsequent snapshot (e.g., 0023) and that the migration chain creates them in the correct order.

Run to verify presence across the repo:


🏁 Script executed:

#!/bin/bash
rg -nP 'contrib_(daily|totals)\b' packages/db -S

Length of output: 1273


Update 0022 snapshot to include the new tables
packages/db/drizzle/meta/0022_snapshot.json currently only lists public.waitlist—it must be regenerated after migration 0022_faulty_gauntlet.sql so that contrib_daily and contrib_totals appear. Confirm that subsequent snapshots reflect their removal in 0023.

🤖 Prompt for AI Agents
In packages/db/drizzle/meta/0022_snapshot.json around lines 6-7, the snapshot
only lists "public.waitlist" but is missing the tables added by migration
0022_faulty_gauntlet.sql; regenerate the migration snapshot so
0022_snapshot.json includes "contrib_daily" and "contrib_totals" (run your
project's drizzle/db snapshot generation step after applying migration 0022),
then verify that the next snapshot (0023) reflects the removal of those tables
and commit the updated snapshot files.

Comment on lines 2059 to 2178
"public.contrib_rollups": {
"name": "contrib_rollups",
"schema": "",
"columns": {
"user_id": {
"name": "user_id",
"type": "text",
"primaryKey": false,
"notNull": true
},
"period": {
"name": "period",
"type": "contrib_period",
"typeSchema": "public",
"primaryKey": false,
"notNull": true
},
"commits": {
"name": "commits",
"type": "integer",
"primaryKey": false,
"notNull": true,
"default": 0
},
"prs": {
"name": "prs",
"type": "integer",
"primaryKey": false,
"notNull": true,
"default": 0
},
"issues": {
"name": "issues",
"type": "integer",
"primaryKey": false,
"notNull": true,
"default": 0
},
"total": {
"name": "total",
"type": "integer",
"primaryKey": false,
"notNull": true,
"default": 0
},
"fetched_at": {
"name": "fetched_at",
"type": "timestamp with time zone",
"primaryKey": false,
"notNull": true,
"default": "now()"
},
"updated_at": {
"name": "updated_at",
"type": "timestamp with time zone",
"primaryKey": false,
"notNull": true,
"default": "now()"
}
},
"indexes": {
"contrib_rollups_user_period_uidx": {
"name": "contrib_rollups_user_period_uidx",
"columns": [
{
"expression": "user_id",
"isExpression": false,
"asc": true,
"nulls": "last"
},
{
"expression": "period",
"isExpression": false,
"asc": true,
"nulls": "last"
}
],
"isUnique": true,
"concurrently": false,
"method": "btree",
"with": {}
},
"contrib_rollups_period_idx": {
"name": "contrib_rollups_period_idx",
"columns": [
{
"expression": "period",
"isExpression": false,
"asc": true,
"nulls": "last"
}
],
"isUnique": false,
"concurrently": false,
"method": "btree",
"with": {}
},
"contrib_rollups_user_idx": {
"name": "contrib_rollups_user_idx",
"columns": [
{
"expression": "user_id",
"isExpression": false,
"asc": true,
"nulls": "last"
}
],
"isUnique": false,
"concurrently": false,
"method": "btree",
"with": {}
}
},
"foreignKeys": {},
"compositePrimaryKeys": {},
"uniqueConstraints": {},
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false
},

🛠️ Refactor suggestion

Add FK on contrib_rollups.user_id → public.user.id.

contrib_rollups.user_id is text and currently has no FK. Add a foreign key to ensure referential integrity and cascade deletes.

Example Drizzle schema change (apply in packages/db/src/schema/contributions.ts, then regenerate):

userId: text('user_id')
  .notNull()
  .references(() => user.id, { onDelete: 'cascade' }),
🤖 Prompt for AI Agents
In packages/db/drizzle/meta/0022_snapshot.json around lines 2059–2178 the
contrib_rollups.user_id column lacks a foreign key to the users table; update
the Drizzle schema in packages/db/src/schema/contributions.ts to change the
userId definition to reference the public user id with onDelete: 'cascade'
(e.g., text('user_id').notNull().references(() => user.id, { onDelete: 'cascade'
})), then regenerate the migration/snapshot so the JSON includes the FK
constraint and cascade delete behavior.

logoUrl: null,
gitRepoUrl: 'gitlabhq/gitlabhq',
gitHost: 'github' as const,
repoId: 'gitlabhq/gitlabhq',

💡 Verification agent

🧩 Analysis chain

Inconsistent repoId semantics for GitHub entries; use provider IDs, not owner/repo slugs.

These four GitHub seeds set repoId to "owner/repo". Others use the numeric GitHub repository ID. This inconsistency will break lookups keyed by (provider, repoId).

Standardize repoId to the provider’s repository ID (stringified). Replace these slugs with the actual GitHub repository IDs and keep repoId slash-free.

Quick checker to catch regressions:

If you want, I can script a one-off to fetch and patch the correct IDs via the GitHub API.

Also applies to: 685-685, 707-707, 729-729


🏁 Script executed:

#!/bin/bash
# Fail if any GitHub repoId contains a slash
rg -nP "gitHost:\s*['\"]github['\"][^]*?repoId:\s*['\"][^'\"]*/[^'\"]*['\"]" packages/db/src/seed/fixtures/projects.ts && {
  echo "Found GitHub repoId slugs (expected numeric IDs)"; exit 1; } || echo "OK"

Length of output: 116


🏁 Script executed:

#!/bin/bash
# Show GitHub entries around gitHost: 'github'
grep -nP "gitHost\s*:\s*['\"]github['\"]" -C3 packages/db/src/seed/fixtures/projects.ts

Length of output: 8497


Standardize GitHub repoId entries to numeric IDs

  • packages/db/src/seed/fixtures/projects.ts:663 – Replace repoId 'gitlabhq/gitlabhq' with its GitHub numeric repository ID (stringified).
  • packages/db/src/seed/fixtures/projects.ts:685 – Replace repoId 'Homebrew/brew' with its GitHub numeric repository ID.
  • packages/db/src/seed/fixtures/projects.ts:707 – Replace repoId 'neovim/neovim' with its GitHub numeric repository ID.
  • packages/db/src/seed/fixtures/projects.ts:729 – Replace repoId 'godotengine/godot' with its GitHub numeric repository ID.

Use the GitHub API or a one-off script to fetch and patch these IDs, ensuring all GitHub seeds use slash-free, stringified repository IDs.

🤖 Prompt for AI Agents
In packages/db/src/seed/fixtures/projects.ts around lines 663, 685, 707, and
729, several repoId fields use "owner/name" GitHub strings; replace each with
the repository's numeric GitHub ID as a string (no slash) by querying the GitHub
API or a one-off script (e.g., GET /repos/:owner/:repo), then update the repoId
values to the returned numeric IDs wrapped as strings to standardize all GitHub
entries.

### Prerequisites

- [Bun](https://bun.sh/) v1.2.13 or higher
- [Bun](https://bun.sh/) v1.2.19 or higher

💡 Verification agent

🧩 Analysis chain

Pin Bun version in repo config; tweak phrasing.

README-only bumps drift. Enforce >=1.2.19 via package.json "packageManager" (and/or .bun-version), and reflect that in the bullet.

Apply within this line:

- - [Bun](https://bun.sh/) v1.2.19 or higher
+ - [Bun](https://bun.sh/) v1.2.19+ (ensure package.json "packageManager" pins a compatible Bun)

Verify pins across the monorepo:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "package.json packageManager pins:"
fd -a package.json | xargs -I{} jq -r '"{}: " + (.packageManager // "—")'

echo -e "\n.bun-version files:"
fd -a '.bun-version' | xargs -I{} sh -c 'echo -n "{}: "; cat "{}"'

python - <<'PY'
import json, subprocess, re
minv=(1,2,19)
def ver_ok(s):
  m=re.search(r'bun@(\d+)\.(\d+)\.(\d+)', s or "")
  return True if not m else tuple(map(int,m.groups())) >= minv
bad=[]
for pj in subprocess.check_output(["fd","-a","package.json"]).decode().splitlines():
  try:
    pm=json.load(open(pj)).get("packageManager","")
  except Exception:
    pm=""
  if not ver_ok(pm): bad.append(f"{pj}: packageManager={pm or '—'} (need >= bun@1.2.19)")
print("\n".join(bad) if bad else "All packageManager pins meet >= bun@1.2.19 (or none set).")
PY

Length of output: 367


Pin Bun version in config and update README
Add a packageManager field in the root package.json (or create a .bun-version file) to pin Bun to >=1.2.19, then update the README bullet to:

- [Bun](https://bun.sh/) v1.2.19+ (pinned via "packageManager" or ".bun-version")

🤖 Prompt for AI Agents
In README.md around line 75, the Bun requirement line should indicate that the
runtime is pinned; add a "packageManager" entry to the repo root package.json
(or create a .bun-version file) and set it to require Bun >=1.2.19, then update
the README bullet to read: "- [Bun](https://bun.sh/) v1.2.19+ (pinned via
\"packageManager\" or \".bun-version\")" so the doc matches the pinned
configuration.

Comment on lines 58 to 62
if (notification.data?.projectId) {
if (notification.type === 'comment_received') {
router.push(`/projects/${notification.data.projectId}#comments`);
} else {
router.push(`/projects/${notification.data.projectId}`);

Missing return statement in handleClick function causes undefined behavior when navigating to projects with comments.

View Details
📝 Patch Details
diff --git a/apps/web/components/layout/notifications-dropdown.tsx b/apps/web/components/layout/notifications-dropdown.tsx
index 0a8160a..38eb200 100644
--- a/apps/web/components/layout/notifications-dropdown.tsx
+++ b/apps/web/components/layout/notifications-dropdown.tsx
@@ -58,6 +58,7 @@ function NotificationItem({ notification, onMarkAsRead }: NotificationItemProps)
     if (notification.data?.projectId) {
       if (notification.type === 'comment_received') {
         router.push(`/projects/${notification.data.projectId}#comments`);
+        return;
       } else {
         router.push(`/projects/${notification.data.projectId}`);
       }

Analysis

In the handleClick function, there are multiple conditional navigation paths, but one branch is missing a return statement. When notification.data?.projectId is truthy and notification.type === 'comment_received', the code executes router.push(`/projects/${notification.data.projectId}#comments`) but doesn't return, causing execution to continue to the next condition. This means for comment notifications the user would be navigated twice: once to the comments section and immediately again to the general project page, effectively bypassing the intended comment navigation. The fix is to add a `return;` statement after the comment navigation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/ui/src/components/select.tsx (1)

67-71: Broken Tailwind arbitrary values: parentheses used instead of brackets (and missing var()).

max-h-(--radix-…) and origin-(--radix-…) won’t compile; Tailwind requires bracket syntax and var(). This breaks menu sizing/animation.

- "… max-h-(--radix-select-content-available-height) … origin-(--radix-select-content-transform-origin) …"
+ "… max-h-[var(--radix-select-content-available-height)] … origin-[var(--radix-select-content-transform-origin)] …"
♻️ Duplicate comments (6)
packages/ui/src/components/select.tsx (1)

40-41: Focus ring now visible — consider adding ring offset and removing height/padding conflict.

Good fix (previous “invisible focus” issue addressed). To improve a11y contrast and avoid vertical sizing conflicts, add a ring offset and drop py-2 since you already set fixed heights via data-[size=*]:h-*.

- "… px-3 py-2 text-sm … focus-visible:ring-2 …"
+ "… px-3 text-sm … focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-offset-background …"
scripts/lb-seed-all-users.ts (2)

87-87: Drop duplicate SADD; setUserMeta already adds the user to lb:users.

setUserMeta performs redis.sadd('lb:users', userId) internally; this extra SADD just doubles network calls.

-          await redis.sadd(USER_SET, id);

36-97: Good fix: seeds username via setUserMeta and populates membership set.

This addresses prior concerns about provider-handle misattribution and the empty lb:users set.

apps/web/components/layout/notifications-dropdown.tsx (1)

54-64: Add “#comments” anchor for launch comment notifications; confirm provider precedence.

When type === 'comment_received' and only launchId is present, deep-link to the comments anchor. Also, confirm whether projectId should take precedence when both projectId and launchId exist (current code prefers launchId).

Apply this diff to add the anchor:

-    if (notification.data?.launchId) {
-      router.push(`/launches/${notification.data.launchId}`);
+    if (notification.data?.launchId) {
+      router.push(
+        notification.type === 'comment_received'
+          ? `/launches/${notification.data.launchId}#comments`
+          : `/launches/${notification.data.launchId}`,
+      );
       return;
     }

If product wants projectId to win when both exist, use this alternative ordering:

-    if (notification.data?.launchId) {
+    if (notification.data?.projectId) {
+      if (notification.type === 'comment_received') {
+        router.push(`/projects/${notification.data.projectId}#comments`);
+      } else {
+        router.push(`/projects/${notification.data.projectId}`);
+      }
+      return;
+    }
+    if (notification.data?.launchId) {
       router.push(
         notification.type === 'comment_received'
           ? `/launches/${notification.data.launchId}#comments`
           : `/launches/${notification.data.launchId}`,
       );
       return;
-    }
-    if (notification.data?.projectId) {
-      if (notification.type === 'comment_received') {
-        router.push(`/projects/${notification.data.projectId}#comments`);
-      } else {
-        router.push(`/projects/${notification.data.projectId}`);
-      }
-    }
+    }
packages/auth/src/server.ts (1)

48-56: Authenticated GitHub lookup implemented as previously requested.

Using Accept and conditional Authorization looks good.

apps/web/app/api/leaderboard/export/route.ts (1)

7-7: Resolved: correct Zod import path.

Importing from 'zod' fixes the previous runtime break from 'zod/v4'. Good catch.

🧹 Nitpick comments (17)
scripts/lb-seed-all-users.ts (3)

53-74: Remove dead provider-accounts query/mapping; it’s unused and adds DB cost.

The links query and map population (including GH/GL handling) are not used later. Drop them to avoid a full-table scan on account and reduce memory/complexity.

Apply:

-const GH = 'github';
-const GL = 'gitlab';
 const USER_SET = 'lb:users';
-const META = (id: string) => `lb:user:${id}`;
@@
-    const links = await db
-      .select({
-        userId: accountTable.userId,
-        providerId: accountTable.providerId,
-        providerLogin: (accountTable as any).providerLogin,
-      })
-      .from(accountTable)
-      .where(inArray(accountTable.userId, userIds));
-
-    const map = new Map<string, { username?: string; gh?: string; gl?: string }>();
-    for (const id of userIds) {
-      map.set(id, { username: usernameByUserId.get(id) });
-    }
-
-    for (const l of links) {
-      const m = map.get(l.userId)!;
-      const login = asStr((l as any).providerLogin);
-      if (!login) continue;
-
-      if (l.providerId === GH && !m.gh) m.gh = login;
-      if (l.providerId === GL && !m.gl) m.gl = login;
-    }

79-89: Harden batch execution to avoid failing the whole run on a single user.

Use Promise.allSettled (and log failures) so one bad record doesn’t abort the batch. Optionally expose concurrency via env.

-      await Promise.all(
-        chunk.map(async (id) => {
-          const username = usernameByUserId.get(id);
-          await setUserMeta(id, { username }, { seedLeaderboards: false });
-          await redis.sadd(USER_SET, id);
-        }),
-      );
+      const results = await Promise.allSettled(
+        chunk.map(async (id) => {
+          const username = usernameByUserId.get(id);
+          await setUserMeta(id, { username }, { seedLeaderboards: false });
+        }),
+      );
+      const failed = results.filter(r => r.status === 'rejected');
+      if (failed.length) {
+        console.warn(`Batch ${i / BATCH + 1}: ${failed.length} failures`);
+      }

24-26: Minor: allow ssl mode via env override.

Provide an opt-out for non-Neon dev DBs that still require SSL, e.g., LB_DB_SSL=require|disable.

-  const needsSSL = /neon\.tech/i.test(url) || /sslmode=require/i.test(url);
-  const pg = postgres(url, needsSSL ? { ssl: 'require' as const } : {});
+  const sslMode = process.env.LB_DB_SSL ?? ( /neon\.tech/i.test(url) || /sslmode=require/i.test(url) ? 'require' : 'disable' );
+  const pg = postgres(url, sslMode === 'require' ? { ssl: 'require' as const } : {});
packages/api/src/redis/lock.ts (2)

21-29: Consider auto-extending the lock for long-running tasks.

If fn can exceed ttlSec, the lock may expire mid-flight. Provide an opt-in helper to heartbeat EXPIRE.

// new helper (outside the shown range)
export async function withLockExtend<T>(
  key: string,
  ttlSec: number,
  fn: () => Promise<T>,
  extendEveryMs = Math.max(1000, (ttlSec * 1000) / 2),
): Promise<T> {
  const token = await acquireLock(key, ttlSec);
  if (!token) throw new Error('LOCK_CONFLICT');
  const timer = setInterval(() => redis.expire(key, ttlSec).catch(() => {}), extendEveryMs);
  try {
    return await fn();
  } finally {
    clearInterval(timer);
    await releaseLock(key, token);
  }
}

31-33: Nit: yyyymmdd param name is misleading.

Function accepts any ISO day string (e.g., 2025-08-31). Consider renaming to dateIso for clarity.

apps/web/app/api/internal/leaderboard/backfill/route.ts (1)

23-45: Schema includes unused fields (days, concurrency).

Either wire them or drop until needed to avoid API drift.

-    days: z.number().int().min(1).max(365).optional(),
-    concurrency: z.number().int().min(1).max(8).optional(),
apps/web/components/layout/notifications-dropdown.tsx (5)

37-47: Tighten icon typing to the known notification types.

Type the parameter to the union to catch future mismatches at compile time; you still keep a safe default branch.

Apply this diff:

-  const getNotificationIcon = (type: string) => {
+  const getNotificationIcon = (
+    type: NotificationItemProps['notification']['type'],
+  ) => {

11-11: Avoid locale-dependent “less than” string check; compute seconds instead.

Comparing the formatted string is brittle under i18n. Use differenceInSeconds.

Apply this diff:

-import { formatDistanceToNow } from 'date-fns';
+import { formatDistanceToNow, differenceInSeconds } from 'date-fns';
@@
-  const timeAgo = formatDistanceToNow(created, { addSuffix: false });
-  const displayTime = timeAgo.includes('less than') ? 'now' : timeAgo;
+  const secondsAgo = differenceInSeconds(new Date(), created);
+  const displayTime =
+    secondsAgo < 60 ? 'now' : formatDistanceToNow(created, { addSuffix: false });

Also applies to: 67-73


111-119: Keep the bell trigger visible while loading.

Returning null hides notifications entirely in the navbar during initial load. Prefer rendering the trigger and gating only the dropdown content.

Apply this diff:

-  if (isLoading) {
-    return null;
-  }

112-114: Disable “Mark all as read” while the mutation is pending.

Prevents double-submits and gives subtle feedback.

Apply this diff:

-  const { notifications, unreadCount, markAsRead, markAllAsRead, isLoading, hasError } =
-    useNotifications();
+  const {
+    notifications,
+    unreadCount,
+    markAsRead,
+    markAllAsRead,
+    isLoading,
+    hasError,
+    isMarkingAllAsRead,
+  } = useNotifications();
@@
-            {unreadCount > 0 && (
-              <Button variant="ghost" onClick={markAllAsRead} className="gap-2 rounded-none border">
+            {unreadCount > 0 && (
+              <Button
+                variant="ghost"
+                onClick={markAllAsRead}
+                disabled={isMarkingAllAsRead}
+                aria-busy={isMarkingAllAsRead}
+                className="gap-2 rounded-none border"
+              >
                 <CheckCheck className="h-4 w-4" />
                 Mark all as read
               </Button>
             )}

Also applies to: 161-166


49-65: Add a safe fallback route when no IDs exist.

Today the click becomes a no-op if neither launchId nor projectId is present. Consider a default (e.g., /notifications) to avoid dead clicks.

Would you like a follow-up diff once the intended fallback URL is confirmed?
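In the meantime, a minimal sketch of such a resolver, assuming /notifications as the fallback (the route is a placeholder, not a confirmed product decision):

// Hypothetical helper: resolve a notification to a destination URL.
// '/notifications' is a placeholder fallback route.
function targetUrl(data?: { launchId?: string; projectId?: string }): string {
  if (data?.projectId) return `/projects/${data.projectId}`;
  if (data?.launchId) return `/launches/${data.launchId}`;
  return '/notifications';
}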

packages/auth/src/server.ts (1)

134-144: Avoid serverless setTimeout-based retry; it likely won’t run after the request completes.

Use a queue/cron or enqueue a retry via Redis/Upstash/QStash; if you must retry here, await the first call and bail on 409 with a logged hint.

Example (await first attempt only):

-        void backfill(body, 'rollups').then(async (res) => {
-          if (res.status === 409) {
-            setTimeout(() => {
-              void backfill(body, 'rollups retry');
-            }, 60_000);
-          }
-        });
+        const res = await backfill(body, 'rollups');
+        if (res.status === 409) {
+          console.info('[auth] backfill already in progress; skipping retry');
+        }
apps/web/app/api/leaderboard/export/route.ts (4)

12-17: Drop unused provider query param (or wire it through).

provider is parsed but never used. Either remove it for now or plumb it through once per-provider pages exist to avoid API confusion.

Apply this diff to remove until supported:

 const Query = z.object({
-  provider: z.enum(['combined', 'github', 'gitlab']).default('combined'),
   window: z.enum(['30d', '365d']).default('30d'),
   limit: z.coerce.number().int().min(1).max(2000).default(500),
   cursor: z.coerce.number().int().min(0).default(0),
 });

29-35: Use page size 100 to match backend cap.

getLeaderboardPage clamps to 100, so asking for 200 adds overhead without benefit.

Apply:

-      limit: Math.min(200, limit - entries.length),
+      limit: Math.min(100, limit - entries.length),

41-44: Nit: de-duplicate userIds before fetching metas.

Saves Redis round-trips in edge cases.

-  const userIds = entries.map((e) => e.userId);
+  const userIds = Array.from(new Set(entries.map((e) => e.userId)));

66-68: Optional: prepend UTF‑8 BOM for better Excel compatibility.

Prevents mojibake in some locales when opening CSV in Excel.

-  const csv = lines.join('\n');
+  const csv = '\uFEFF' + lines.join('\n');
apps/web/components/leaderboard/leaderboard-client.tsx (1)

391-398: CSV export link: consider intent for page scope.

Export currently uses the same limit/cursor as the table. If the intent is “export current page,” this is correct. If you meant “export top N from start,” set cursor=0 in the href.
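For example, a fixed-start export link would look like the following (the win and limit variable names are illustrative, not the component's actual state names):

const href = `/api/leaderboard/export?window=${win}&limit=${limit}&cursor=0`;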

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b301527 and 1b4c2da.

📒 Files selected for processing (22)
  • apps/web/app/(public)/leaderboard/page.tsx (1 hunks)
  • apps/web/app/api/internal/leaderboard/backfill/route.ts (1 hunks)
  • apps/web/app/api/internal/leaderboard/cron/daily/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/dev-backfill/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/enroll/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/export/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/profiles/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/route.ts (1 hunks)
  • apps/web/components/layout/notifications-dropdown.tsx (1 hunks)
  • apps/web/components/leaderboard/leaderboard-client.tsx (1 hunks)
  • packages/api/package.json (1 hunks)
  • packages/api/src/leaderboard/aggregator.ts (1 hunks)
  • packages/api/src/leaderboard/useMeta.ts (1 hunks)
  • packages/api/src/providers/github.ts (1 hunks)
  • packages/api/src/providers/gitlab.ts (1 hunks)
  • packages/api/src/redis/lock.ts (1 hunks)
  • packages/auth/src/leaderboard-hooks.ts (1 hunks)
  • packages/auth/src/server.ts (2 hunks)
  • packages/ui/src/components/input.tsx (1 hunks)
  • packages/ui/src/components/select.tsx (3 hunks)
  • scripts/lb-backfill-365-all.ts (1 hunks)
  • scripts/lb-seed-all-users.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (14)
  • packages/api/src/leaderboard/useMeta.ts
  • apps/web/app/(public)/leaderboard/page.tsx
  • apps/web/app/api/leaderboard/profiles/route.ts
  • packages/api/src/providers/github.ts
  • packages/api/package.json
  • apps/web/app/api/leaderboard/enroll/route.ts
  • packages/api/src/leaderboard/aggregator.ts
  • apps/web/app/api/internal/leaderboard/cron/daily/route.ts
  • apps/web/app/api/leaderboard/route.ts
  • packages/auth/src/leaderboard-hooks.ts
  • apps/web/app/api/leaderboard/dev-backfill/route.ts
  • packages/api/src/providers/gitlab.ts
  • scripts/lb-backfill-365-all.ts
  • packages/ui/src/components/input.tsx
🧰 Additional context used
🧬 Code graph analysis (7)
apps/web/components/layout/notifications-dropdown.tsx (3)
packages/db/src/schema/notifications.ts (1)
  • notification (11-40)
packages/ui/src/lib/utils.ts (1)
  • cn (4-6)
apps/web/hooks/use-notifications.ts (1)
  • useNotifications (8-101)
packages/auth/src/server.ts (5)
packages/env/src/server.ts (1)
  • env (4-32)
packages/db/src/schema/auth.ts (1)
  • account (39-55)
packages/api/src/leaderboard/meta.ts (1)
  • setUserMeta (11-39)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
  • setUserMetaFromProviders (13-42)
apps/web/components/leaderboard/leaderboard-client.tsx (1)
packages/api/src/leaderboard/read.ts (1)
  • LeaderRow (21-21)
apps/web/app/api/internal/leaderboard/backfill/route.ts (7)
packages/auth/src/server.ts (1)
  • auth (20-187)
packages/env/src/verifyCron.ts (1)
  • isCronAuthorized (3-9)
packages/api/src/leaderboard/aggregator.ts (1)
  • refreshUserRollups (65-169)
packages/api/src/leaderboard/redis.ts (1)
  • syncUserLeaderboards (15-39)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
  • setUserMetaFromProviders (13-42)
packages/api/src/redis/lock.ts (4)
  • backfillLockKey (35-37)
  • withLock (21-29)
  • acquireLock (3-7)
  • releaseLock (9-19)
scripts/lb-seed-all-users.ts (3)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/meta.ts (1)
  • setUserMeta (11-39)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/api/src/redis/lock.ts (1)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
apps/web/app/api/leaderboard/export/route.ts (3)
packages/api/src/leaderboard/read.ts (1)
  • getLeaderboardPage (82-103)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/useMeta.ts (1)
  • getUserMetas (44-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Vade Review
🔇 Additional comments (5)
packages/api/src/redis/lock.ts (1)

9-19: Token-checked unlock via Lua looks solid.

Good atomic compare-and-delete. Errors swallowed appropriately.

apps/web/app/api/internal/leaderboard/backfill/route.ts (3)

105-122: Nice: token-aware two-key lock avoids cross-provider races.

Consistent alphabetical lock ordering removes deadlock risk.
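For illustration, the ordering idea in isolation (a sketch, not the route's actual code):

// Acquire multi-key locks in lexicographic order so two concurrent
// callers can never hold each other's first key (no A→B vs B→A cycle).
function orderedLockKeys(a: string, b: string): [string, string] {
  return a < b ? [a, b] : [b, a];
}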


141-149: Good conflict semantics and observability.

LOCK_CONFLICT mapped to 409 with provider context and ref id logging is clear.


69-75: Request: verify env token presence across environments.

If this route is called in dev without provider tokens, it 400s. Confirm intended DX.

Would you like me to add a guard that treats missing tokens as no-op in non-production?
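A sketch of what that guard could look like inside the route handler (the env check and response shape are assumptions):

// Hypothetical: short-circuit instead of 400-ing when provider tokens are
// absent outside production.
if (process.env.NODE_ENV !== 'production' && !githubToken && !gitlabToken) {
  return Response.json({ ok: true, skipped: 'no provider tokens configured' });
}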

packages/auth/src/server.ts (1)

30-47: Hook wiring and context handling look solid.

Session/newAccount extraction, null-guards, and early-return are clean.

export const dynamic = 'force-dynamic';

import { NextRequest } from 'next/server';
import { z } from 'zod/v4';

⚠️ Potential issue

Fix Zod import path.

zod/v4 is not a published entry; import from 'zod'. This will otherwise fail to build.

-import { z } from 'zod/v4';
+import { z } from 'zod';

import { isCronAuthorized } from '@workspace/env/verify-cron';
import { env } from '@workspace/env/server';

import { backfillLockKey, withLock, acquireLock, releaseLock } from '@workspace/api/redis/locks';

⚠️ Potential issue

Incorrect import path: file is redis/lock, not redis/locks.

This will break resolution.

-import { backfillLockKey, withLock, acquireLock, releaseLock } from '@workspace/api/redis/locks';
+import { backfillLockKey, withLock, acquireLock, releaseLock } from '@workspace/api/redis/lock';

Comment on lines +79 to +97
async function runOnce() {
const wrote = await refreshUserRollups(
{ db },
{
userId: body.userId,
githubLogin: body.githubLogin,
gitlabUsername: body.gitlabUsername,
githubToken,
gitlabToken,
gitlabBaseUrl,
},
);

await syncUserLeaderboards(db, body.userId);

await setUserMetaFromProviders(body.userId, body.githubLogin, body.gitlabUsername);

return wrote;
}

🛠️ Refactor suggestion

Two-provider flow only runs the aggregation once.

When both providers are present, runOnce is invoked a single time, so one provider never backfills. Run each provider sequentially under the two-key lock.

-  async function runOnce() {
-    const wrote = await refreshUserRollups(
+  async function runOnce(p: 'github' | 'gitlab') {
+    const wrote = await refreshUserRollups(
       { db },
       {
         userId: body.userId,
-        githubLogin: body.githubLogin,
-        gitlabUsername: body.gitlabUsername,
-        githubToken,
-        gitlabToken,
+        githubLogin: p === 'github' ? body.githubLogin : undefined,
+        gitlabUsername: p === 'gitlab' ? body.gitlabUsername : undefined,
+        githubToken: p === 'github' ? githubToken : undefined,
+        gitlabToken: p === 'gitlab' ? gitlabToken : undefined,
         gitlabBaseUrl,
       },
     );
 
-    await syncUserLeaderboards(db, body.userId);
-
-    await setUserMetaFromProviders(body.userId, body.githubLogin, body.gitlabUsername);
+    await syncUserLeaderboards(db, body.userId);
+    await setUserMetaFromProviders(body.userId, body.githubLogin, body.gitlabUsername);
 
     return wrote;
   }
@@
-      return await withLock(k1, ttlSec, async () => {
-        const t2 = await acquireLock(k2, ttlSec);
+      return await withLock(k1, ttlSec, async () => {
+        const t2 = await acquireLock(k2, ttlSec);
         if (!t2) throw new Error(`LOCK_CONFLICT:${p2}`);
         try {
-          const out = await runOnce();
+          const out1 = await runOnce(p1);
+          const out2 = await runOnce(p2);
           return Response.json({
             ok: true,
             userId: body.userId,
             providers,
             mode: 'rollups',
             snapshotDate: todayStr,
-            wrote: out.wrote,
-            providerUsed: out.provider,
+            wrote: out2.wrote, // preserve existing shape
+            providerUsed: 'both',
+            wroteByProvider: { [p1]: out1.wrote, [p2]: out2.wrote }, // additive field
           });
         } finally {
           await releaseLock(k2, t2);
         }
       });

Alternative (if multi-provider is not supported by design): enforce exclusivity at validation time and error when both are provided.

Also applies to: 99-122

import { z } from 'zod';

import { getLeaderboardPage } from '@workspace/api/leaderboard/read';
import { getUserMetas } from '@workspace/api/leaderboard/use-meta';

💡 Verification agent

🧩 Analysis chain

Possible broken import on case-sensitive filesystems (useMeta vs use-meta).

Provided context shows the file as useMeta.ts. On Linux, importing '@workspace/api/leaderboard/use-meta' will 404. If the file is indeed useMeta.ts, update the path.

Apply this diff:

-import { getUserMetas } from '@workspace/api/leaderboard/use-meta';
+import { getUserMetas } from '@workspace/api/leaderboard/useMeta';

Run to verify the actual filename and export:


🏁 Script executed:

#!/bin/bash
# List potential files
fd -HI -t f 'use*meta*.ts' packages/api/src/leaderboard

# Confirm exported symbol
rg -nP '\bexport\s+async\s+function\s+getUserMetas\b' packages/api/src/leaderboard -C2

Length of output: 569


Fix case-sensitive import path

In apps/web/app/api/leaderboard/export/route.ts (line 10), update the import to match the actual filename:

-import { getUserMetas } from '@workspace/api/leaderboard/use-meta';
+import { getUserMetas } from '@workspace/api/leaderboard/useMeta';

Comment on lines +52 to +61
const row = [
rank,
e.userId,
m?.username ?? '',
m?.githubLogin ?? '',
m?.gitlabUsername ?? '',
e.score,
]
.map((v) => (typeof v === 'string' ? `"${v.replace(/"/g, '""')}"` : String(v)))
.join(',');

🛠️ Refactor suggestion

Mitigate CSV injection (Excel formula execution).

User-controlled strings starting with =, +, -, or @ can execute formulas when opened in Excel. Prefix such cells with a single quote before quoting.

Apply:

-    ]
-      .map((v) => (typeof v === 'string' ? `"${v.replace(/"/g, '""')}"` : String(v)))
+    ].map((v) => csvCell(v))
       .join(',');

Add this helper near the top of the file (after imports):

function csvCell(v: unknown): string {
  if (typeof v !== 'string') return String(v);
  const s = v.replace(/"/g, '""');
  const needsGuard = /^[=+\-@]/.test(s.trimStart());
  return `"${needsGuard ? "'" + s : s}"`;
}

Comment on lines +10 to +13
import { setUserMetaFromProviders } from '@workspace/api/leaderboard/use-meta';
import { setUserMeta } from '@workspace/api/leaderboard/meta';
import { createAuthMiddleware } from 'better-auth/api';
import { eq } from 'drizzle-orm';

💡 Verification agent

🧩 Analysis chain

Verify import path casing: use-meta vs useMeta.

On case-sensitive filesystems this may fail unless there’s a barrel re-export.

Run:


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Where is setUserMetaFromProviders defined?"
rg -nP 'export\s+async\s+function\s+setUserMetaFromProviders' -C2 packages/api || true

echo "Barrel re-exports referencing useMeta/use-meta:"
fd -a index.ts packages/api/src/leaderboard | while read -r f; do
  echo "---- $f ----"
  rg -n "useMeta|use-meta|setUserMetaFromProviders" -n "$f" || true
done

Length of output: 728


Change import to match ‘useMeta.ts’
Import path '@workspace/api/leaderboard/use-meta' doesn’t exist on case-sensitive filesystems—the file is named useMeta.ts. Update the import to '@workspace/api/leaderboard/useMeta' or add a barrel export in packages/api/src/leaderboard/index.ts.
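The barrel variant is a one-liner (path taken from the verification output above):

// packages/api/src/leaderboard/index.ts
export * from './useMeta';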


Comment on lines +15 to +19
const ORIGIN =
env.VERCEL_ENV === 'production'
? `https://${env.VERCEL_PROJECT_PRODUCTION_URL}`
: `http://${env.VERCEL_PROJECT_PRODUCTION_URL || 'localhost:3000'}`;


⚠️ Potential issue

Fix ORIGIN for preview/dev; current logic points preview/dev to production over HTTP.

VERCEL_PROJECT_PRODUCTION_URL is always set, so in preview/dev the ORIGIN resolves to http://<production-domain>, which will fail and/or hit the wrong environment. Use VERCEL_URL when available; otherwise fall back to localhost.

Apply:

-const ORIGIN =
-  env.VERCEL_ENV === 'production'
-    ? `https://${env.VERCEL_PROJECT_PRODUCTION_URL}`
-    : `http://${env.VERCEL_PROJECT_PRODUCTION_URL || 'localhost:3000'}`;
+const ORIGIN =
+  env.VERCEL_ENV === 'production'
+    ? `https://${env.VERCEL_PROJECT_PRODUCTION_URL}`
+    : process.env.VERCEL_URL
+      ? `https://${process.env.VERCEL_URL}`
+      : 'http://localhost:3000';

Comment on lines +48 to +63
async function githubIdToLogin(id: string): Promise<string | undefined> {
try {
const res = await fetch(`https://api.github.com/user/${id}`, {
headers: {
'User-Agent': 'ossdotnow',
Accept: 'application/vnd.github+json',
...(env.GITHUB_TOKEN ? { Authorization: `Bearer ${env.GITHUB_TOKEN}` } : {}),
},
});
if (!res.ok) return undefined;
const j = await res.json().catch(() => null);
return (j && typeof j.login === 'string' && j.login) || undefined;
} catch {
return undefined;
}
}

🛠️ Refactor suggestion

Harden GitHub lookup: add timeout, API version header, and prefer session access token.

Prevents hangs and aligns with GitHub API best practices.

Apply:

-      async function githubIdToLogin(id: string): Promise<string | undefined> {
-        try {
-          const res = await fetch(`https://api.github.com/user/${id}`, {
-            headers: {
-              'User-Agent': 'ossdotnow',
-              Accept: 'application/vnd.github+json',
-              ...(env.GITHUB_TOKEN ? { Authorization: `Bearer ${env.GITHUB_TOKEN}` } : {}),
-            },
-          });
+      async function githubIdToLogin(id: string): Promise<string | undefined> {
+        const controller = new AbortController();
+        const timeout = setTimeout(() => controller.abort(), 5_000);
+        try {
+          const res = await fetch(`https://api.github.com/user/${id}`, {
+            headers: {
+              'User-Agent': 'ossdotnow',
+              Accept: 'application/vnd.github+json',
+              'X-GitHub-Api-Version': '2022-11-28',
+              ...(newAccount?.accessToken
+                ? { Authorization: `Bearer ${newAccount.accessToken}` }
+                : env.GITHUB_TOKEN
+                ? { Authorization: `Bearer ${env.GITHUB_TOKEN}` }
+                : {}),
+            },
+            signal: controller.signal,
+          });
           if (!res.ok) return undefined;
           const j = await res.json().catch(() => null);
           return (j && typeof j.login === 'string' && j.login) || undefined;
-        } catch {
+        } catch {
           return undefined;
-        }
+        } finally {
+          clearTimeout(timeout);
+        }
       }

Committable suggestion skipped: line range outside the PR's diff.


Comment on lines +114 to +121
function backfill(body: unknown, label: string) {
return fetch(`${ORIGIN}/api/internal/leaderboard/backfill`, {
method: 'POST',
headers: {
'content-type': 'application/json',
authorization: `Bearer ${env.CRON_SECRET}`,
},
body: JSON.stringify(body),

🛠️ Refactor suggestion

Backfill call: set proper header casing and add timeout to avoid hanging requests.

Header names are case-insensitive but many frameworks read "Authorization"; add abort to bound latency.

Apply:

-      function backfill(body: unknown, label: string) {
-        return fetch(`${ORIGIN}/api/internal/leaderboard/backfill`, {
+      function backfill(body: unknown, label: string) {
+        const controller = new AbortController();
+        const timeout = setTimeout(() => controller.abort(), 10_000);
+        return fetch(`${ORIGIN}/api/internal/leaderboard/backfill`, {
           method: 'POST',
           headers: {
             'content-type': 'application/json',
-            authorization: `Bearer ${env.CRON_SECRET}`,
+            Authorization: `Bearer ${env.CRON_SECRET}`,
           },
           body: JSON.stringify(body),
-        })
+          signal: controller.signal,
+        })
           .then(async (r) => {
             const text = await r.text().catch(() => '');
             console.log(`[auth] backfill ${label} ->`, r.status, text.slice(0, 200));
             return { ok: r.ok, status: r.status };
           })
           .catch((e) => {
             console.warn(`[auth] backfill ${label} fetch failed:`, e);
             return { ok: false, status: 0 };
-          });
+          })
+          .finally(() => clearTimeout(timeout));
       }

Comment on lines 112 to 115
className={cn(
"focus:bg-accent focus:text-accent-foreground [&_svg:not([class*='text-'])]:text-muted-foreground relative flex w-full cursor-default items-center gap-2 rounded-sm py-1.5 pr-8 pl-2 text-sm outline-hidden select-none data-[disabled]:pointer-events-none data-[disabled]:opacity-50 [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4 *:[span]:last:flex *:[span]:last:items-center *:[span]:last:gap-2",
"focus:bg-accent focus:text-accent-foreground [&_svg:not([class*='text-'])]:text-muted-foreground relative flex w-full cursor-default items-center gap-2 rounded-none py-1.5 pr-8 pl-2 text-sm outline-hidden select-none data-[disabled]:pointer-events-none data-[disabled]:opacity-50 [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4 *:[span]:last:flex *:[span]:last:items-center *:[span]:last:gap-2",
className
)}

💡 Verification agent

🧩 Analysis chain

outline-hidden isn’t a Tailwind utility; replace with outline-none.

This class is ignored and adds noise.

- "… outline-hidden select-none …"
+ "… outline-none select-none …"

Run to find other occurrences:


🏁 Script executed:

#!/bin/bash
rg -n -C2 'outline-hidden' -g '!**/node_modules/**'

Length of output: 10873


Replace all outline-hidden classes with outline-none
outline-hidden isn’t a valid Tailwind utility (it’s ignored and only adds noise). Update in select.tsx (L112) and every other occurrence (e.g. sidebar.tsx, popover.tsx, dialog.tsx, command.tsx, dropdown-menu.tsx, sheet.tsx) to use outline-none.
You can locate them via:

rg -l 'outline-hidden' -g '!**/node_modules/**'

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (6)
apps/web/app/api/leaderboard/export/route.ts (2)

7-7: Fix invalid Zod import path.

zod/v4 is not a published entrypoint and will 404 at runtime; import from 'zod'.

-import { z } from 'zod/v4';
+import { z } from 'zod';

52-61: Guard against CSV/Excel formula injection.

Prefix cells starting with =,+,-,@ to prevent formula execution when opened in Excel.

+function csvCell(v: unknown): string {
+  if (typeof v !== 'string') return String(v);
+  const s = v.replace(/"/g, '""');
+  const needsGuard = /^[=+\-@]/.test(s.trimStart());
+  return `"${needsGuard ? "'" + s : s}"`;
+}
+
 const Query = z.object({
-    ]
-      .map((v) => (typeof v === 'string' ? `"${v.replace(/"/g, '""')}"` : String(v)))
+    ].map((v) => csvCell(v))
       .join(',');

Also applies to: 12-12

packages/api/src/providers/github.ts (4)

3-3: Fix incorrect Zod import (regression).

This was flagged earlier and marked "addressed," but the file still imports from the subpath which breaks in many envs (Bun, some bundlers).

-import { z } from 'zod/v4';
+import { z } from 'zod';

Run to catch any other occurrences:

#!/bin/bash
rg -nP -C2 "\bzod/v4\b"

118-124: Query by-repository contributions and filter to public repos.

This enables parity with GitLab by excluding private repositories from totals.

         cwin: contributionsCollection(from: $from, to: $to) {
           contributionCalendar { totalContributions }
           totalCommitContributions
           totalPullRequestContributions
           totalIssueContributions
+          commitContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
+          pullRequestContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
+          issueContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
         }

164-175: Apply public-only logic to rollup windows (30d/365d) as well.

Mirror the by-repo fields in both windows so totals exclude private repos consistently.

         c30: contributionsCollection(from: $from30, to: $to) {
           contributionCalendar { totalContributions }
           totalCommitContributions
           totalPullRequestContributions
           totalIssueContributions
+          commitContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
+          pullRequestContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
+          issueContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
         }
         c365: contributionsCollection(from: $from365, to: $to) {
           contributionCalendar { totalContributions }
           totalCommitContributions
           totalPullRequestContributions
           totalIssueContributions
+          commitContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
+          pullRequestContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
+          issueContributionsByRepository(maxRepositories: 100) {
+            repository { isPrivate }
+            contributions { totalCount }
+          }
         }

190-196: Use public-only sums in rollup pick().

Ensures both periods reflect the same privacy rule.

-  const pick = (w?: z.infer<typeof WindowSchema>): GithubContributionTotals => ({
-    login: userLogin,
-    commits: w?.totalCommitContributions ?? 0,
-    prs: w?.totalPullRequestContributions ?? 0,
-    issues: w?.totalIssueContributions ?? 0,
-    rateLimit: rl,
-  });
+  const pick = (w?: z.infer<typeof WindowSchema>): GithubContributionTotals => {
+    const sumPublic = (
+      arr?: { repository: { isPrivate: boolean }; contributions: { totalCount: number } }[],
+    ) => arr?.reduce((acc, r) => acc + (r.repository.isPrivate ? 0 : r.contributions.totalCount), 0);
+    return {
+      login: userLogin,
+      commits: (sumPublic(w?.commitContributionsByRepository) ?? w?.totalCommitContributions) ?? 0,
+      prs: (sumPublic(w?.pullRequestContributionsByRepository) ?? w?.totalPullRequestContributions) ?? 0,
+      issues: (sumPublic(w?.issueContributionsByRepository) ?? w?.totalIssueContributions) ?? 0,
+      rateLimit: rl,
+    };
+  };
🧹 Nitpick comments (8)
packages/api/src/leaderboard/use-meta.ts (3)

28-36: URL-encode provider handles in avatar URLs.

Avoid malformed URLs if a handle ever includes unexpected characters. Safer to wrap with encodeURIComponent.

-      updates.avatarUrl = `https://github.com/${gh}.png?size=80`;
+      updates.avatarUrl = `https://github.com/${encodeURIComponent(gh)}.png?size=80`;
@@
-      updates.avatarUrl = `https://gitlab.com/${gl}.png?width=80`;
+      updates.avatarUrl = `https://gitlab.com/${encodeURIComponent(gl)}.png?width=80`;
@@
-      (m.githubLogin ? `https://github.com/${m.githubLogin}.png?size=80` : undefined) ||
-      (m.gitlabUsername ? `https://gitlab.com/${m.gitlabUsername}.png?width=80` : undefined);
+      (m.githubLogin ? `https://github.com/${encodeURIComponent(m.githubLogin)}.png?size=80` : undefined) ||
+      (m.gitlabUsername ? `https://gitlab.com/${encodeURIComponent(m.gitlabUsername)}.png?width=80` : undefined);

Also applies to: 53-56


24-37: Avoid redundant writes when values are unchanged.

Minor write-amplification: only hset provider fields if they differ from existing.

   if (githubLogin && githubLogin.trim()) {
     const gh = githubLogin.trim();
-    updates.githubLogin = gh;
+    if (existing.githubLogin !== gh) updates.githubLogin = gh;
@@
   if (gitlabUsername && gitlabUsername.trim()) {
     const gl = gitlabUsername.trim();
-    updates.gitlabUsername = gl;
+    if (existing.gitlabUsername !== gl) updates.gitlabUsername = gl;

45-47: Deduplicate userIds before pipelining to cut round-trips.

Large exports could repeat the same user; de-duping keeps pipeline lean without changing output order.

 export async function getUserMetas(userIds: string[]): Promise<UserMeta[]> {
-  const pipe = redis.pipeline();
-  for (const id of userIds) pipe.hgetall(metaKey(id));
-  const rows = await pipe.exec();
-
-  return rows.map((raw, i) => {
-    const id = userIds[i]!;
-    const m = (raw || {}) as Record<string, string | undefined>;
+  if (userIds.length === 0) return [];
+  const uniqueIds = Array.from(new Set(userIds));
+  const pipe = redis.pipeline();
+  for (const id of uniqueIds) pipe.hgetall(metaKey(id));
+  const rows = await pipe.exec();
+
+  const metaById = new Map<string, Record<string, string | undefined>>();
+  rows.forEach((raw, i) => {
+    metaById.set(uniqueIds[i]!, (raw || {}) as Record<string, string | undefined>);
+  });
+
+  return userIds.map((id) => {
+    const m = metaById.get(id) ?? {};
     const username = m.username || m.githubLogin || m.gitlabUsername || id.slice(0, 8);
     const avatarUrl =
       m.avatarUrl ||
       (m.githubLogin ? `https://github.com/${encodeURIComponent(m.githubLogin)}.png?size=80` : undefined) ||
       (m.gitlabUsername ? `https://gitlab.com/${encodeURIComponent(m.gitlabUsername)}.png?width=80` : undefined);
 
-    return {
-      userId: id,
-      username,
-      avatarUrl,
-      githubLogin: m.githubLogin,
-      gitlabUsername: m.gitlabUsername,
-    };
-  });
+    return { userId: id, username, avatarUrl, githubLogin: m.githubLogin, gitlabUsername: m.gitlabUsername };
+  });
 }

Also applies to: 49-65

apps/web/app/api/leaderboard/export/route.ts (2)

13-14: Drop unused provider query param (or implement filtering).

Param is parsed but not used; simplify until per-provider exports land.

-  provider: z.enum(['combined', 'github', 'gitlab']).default('combined'),

32-32: Align page size with reader’s cap (100).

getLeaderboardPage clamps to 100; request 100 to avoid extra loops.

-      limit: Math.min(200, limit - entries.length),
+      limit: Math.min(100, limit - entries.length),
packages/api/src/providers/github.ts (1)

83-86: Consider sending a User-Agent.

GitHub recommends a User-Agent for diagnostics/rate-limit attribution. Non-blocking.
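A sketch of the header addition on the existing fetch (the UA string is a placeholder; any stable product identifier works):

const res = await fetch(GITHUB_GQL_ENDPOINT, {
  method: 'POST',
  headers: {
    Authorization: `bearer ${token}`,
    'Content-Type': 'application/json',
    'User-Agent': 'ossdotnow', // placeholder identifier
  },
  body: JSON.stringify({ query, variables }),
});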

scripts/github-test.ts (2)

34-40: Parse bare YYYY-MM-DD as UTC to avoid TZ drift.

Without this, dates parse as local time and shift when serialized to ISO, producing incorrect GitHub ranges.

 function toDateOrThrow(s: string, label: string): Date {
-  const d = new Date(s);
+  // Interpret bare YYYY-MM-DD as UTC midnight to avoid TZ drift.
+  const m = s.match(/^(\d{4})-(\d{2})-(\d{2})$/);
+  const d = m
+    ? new Date(Date.UTC(Number(m[1]), Number(m[2]) - 1, Number(m[3])))
+    : new Date(s);
   if (Number.isNaN(d.getTime())) {
     throw new Error(`Invalid ${label} date: ${JSON.stringify(s)}`);
   }
   return d;
 }

82-86: Allow GH_TOKEN as an alternative env var (optional).

Common CI setups export GH_TOKEN; supporting both improves ergonomics.

If desired:

const token = args.token || process.env.GITHUB_TOKEN || process.env.GH_TOKEN;
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 1b4c2da and d9a9c57.

📒 Files selected for processing (10)
  • apps/web/app/api/leaderboard/dev-backfill/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/export/route.ts (1 hunks)
  • apps/web/app/api/leaderboard/profiles/route.ts (1 hunks)
  • apps/web/components/comments/reply-form.tsx (1 hunks)
  • packages/api/package.json (1 hunks)
  • packages/api/src/leaderboard/use-meta.ts (1 hunks)
  • packages/api/src/providers/github.ts (1 hunks)
  • packages/api/src/providers/gitlab.ts (1 hunks)
  • packages/api/src/routers/profile.ts (2 hunks)
  • scripts/github-test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • packages/api/src/providers/gitlab.ts
  • apps/web/app/api/leaderboard/profiles/route.ts
  • packages/api/package.json
  • apps/web/components/comments/reply-form.tsx
  • packages/api/src/routers/profile.ts
  • apps/web/app/api/leaderboard/dev-backfill/route.ts
🧰 Additional context used
🧬 Code graph analysis (4)
packages/api/src/leaderboard/use-meta.ts (1)
packages/api/src/redis/client.ts (1)
  • redis (4-7)
packages/api/src/providers/github.ts (1)
packages/api/src/providers/gitlab.ts (2)
  • DateLike (3-3)
  • DateRange (4-4)
scripts/github-test.ts (1)
packages/api/src/providers/github.ts (1)
  • getGithubContributionTotals (108-143)
apps/web/app/api/leaderboard/export/route.ts (4)
apps/web/app/api/leaderboard/route.ts (1)
  • GET (18-46)
packages/api/src/leaderboard/read.ts (1)
  • getLeaderboardPage (82-103)
packages/db/src/index.ts (1)
  • db (16-22)
packages/api/src/leaderboard/use-meta.ts (1)
  • getUserMetas (44-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Vade Review
🔇 Additional comments (2)
packages/api/src/providers/github.ts (2)

206-217: Day window semantics are correct.

Inclusive start at 00:00:00Z and exclusive end at next UTC midnight avoids off-by-ms issues and matches provider parity.
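For reference, a self-contained sketch of that window construction:

// [00:00:00Z of the given day, next UTC midnight) — end exclusive.
function utcDayWindow(day: Date): { from: Date; to: Date } {
  const from = new Date(Date.UTC(day.getUTCFullYear(), day.getUTCMonth(), day.getUTCDate()));
  const to = new Date(from.getTime() + 24 * 60 * 60 * 1000);
  return { from, to };
}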


28-37: Confirm by-repo aggregates and public-only intent
Extend the WindowSchema in packages/api/src/providers/github.ts with optional arrays for commitContributionsByRepository, pullRequestContributionsByRepository, and issueContributionsByRepository (per the suggested diff). Verify that the GitHub GraphQL API actually supports per-repository contribution breakdowns and confirm that product requirements call for counting only public contributions across all providers.
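A hedged sketch of that schema extension, assuming WindowSchema is a z.object (field names mirror the contributionsCollection shape quoted in the diffs above):

import { z } from 'zod';

const ByRepo = z
  .array(
    z.object({
      repository: z.object({ isPrivate: z.boolean() }),
      contributions: z.object({ totalCount: z.number() }),
    }),
  )
  .optional();

// Hypothetical: extend the existing WindowSchema with the by-repo arrays.
const WindowSchemaWithRepos = WindowSchema.extend({
  commitContributionsByRepository: ByRepo,
  pullRequestContributionsByRepository: ByRepo,
  issueContributionsByRepository: ByRepo,
});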

Comment on lines +70 to +88
async function githubGraphQLRequest<T>({
token,
query,
variables,
}: {
token: string;
query: string;
variables: Record<string, unknown>;
}): Promise<T> {
if (!token) throw new Error('GitHub GraphQL token is required. Pass GITHUB_TOKEN.');

const res = await fetch(GITHUB_GQL_ENDPOINT, {
method: 'POST',
headers: {
Authorization: `bearer ${token}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ query, variables }),
});

🛠️ Refactor suggestion

Add request timeout + abort to prevent hung GraphQL calls.

External calls lack a timeout; a stalled connection will block workers and cron runs.

 async function githubGraphQLRequest<T>({
   token,
   query,
   variables,
+  timeoutMs = 15_000,
 }: {
   token: string;
   query: string;
   variables: Record<string, unknown>;
+  timeoutMs?: number;
 }): Promise<T> {
   if (!token) throw new Error('GitHub GraphQL token is required. Pass GITHUB_TOKEN.');
 
-  const res = await fetch(GITHUB_GQL_ENDPOINT, {
-    method: 'POST',
-    headers: {
-      Authorization: `bearer ${token}`,
-      'Content-Type': 'application/json',
-    },
-    body: JSON.stringify({ query, variables }),
-  });
+  const ac = new AbortController();
+  const timer = setTimeout(() => ac.abort(), timeoutMs);
+  let res: any;
+  try {
+    res = await fetch(GITHUB_GQL_ENDPOINT, {
+      method: 'POST',
+      headers: {
+        Authorization: `bearer ${token}`,
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({ query, variables }),
+      signal: ac.signal,
+    });
+  } finally {
+    clearTimeout(timer);
+  }

Comment on lines +135 to +142
const w = data.user?.cwin;
return {
login: data.user?.login ?? login,
commits: w?.totalCommitContributions ?? 0,
prs: w?.totalPullRequestContributions ?? 0,
issues: w?.totalIssueContributions ?? 0,
rateLimit: data.rateLimit ? { ...data.rateLimit } : undefined,
};

🛠️ Refactor suggestion

Compute totals from public repositories when available.

Fallback to GitHub’s window totals if the by-repo arrays are absent.

-  const w = data.user?.cwin;
+  const w = data.user?.cwin;
+  const sumPublic = (
+    arr?: { repository: { isPrivate: boolean }; contributions: { totalCount: number } }[],
+  ) => arr?.reduce((acc, r) => acc + (r.repository.isPrivate ? 0 : r.contributions.totalCount), 0);
   return {
     login: data.user?.login ?? login,
-    commits: w?.totalCommitContributions ?? 0,
-    prs: w?.totalPullRequestContributions ?? 0,
-    issues: w?.totalIssueContributions ?? 0,
+    commits: (sumPublic(w?.commitContributionsByRepository) ?? w?.totalCommitContributions) ?? 0,
+    prs: (sumPublic(w?.pullRequestContributionsByRepository) ?? w?.totalPullRequestContributions) ?? 0,
+    issues: (sumPublic(w?.issueContributionsByRepository) ?? w?.totalIssueContributions) ?? 0,
     rateLimit: data.rateLimit ? { ...data.rateLimit } : undefined,
   };

@@ -0,0 +1,109 @@
#!/usr/bin/env bun
import 'dotenv/config';
import { z } from 'zod/v4';

⚠️ Potential issue

Fix incorrect Zod import.

-import { z } from 'zod/v4';
+import { z } from 'zod';
🤖 Prompt for AI Agents
In scripts/github-test.ts around line 3 the Zod import is incorrect; replace
"import { z } from 'zod/v4';" with "import { z } from 'zod';" so the module
resolves correctly, then run typecheck/tsc to verify no import errors remain.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
apps/web/components/layout/notifications-dropdown.tsx (2)

16-24: Widen createdAt type and coerce before formatting (avoid runtime edge cases).

Some clients may get strings; coerce to Date and avoid brittle “less than” text checks.

 interface NotificationItemProps {
   notification: {
@@
-    createdAt: Date;
+    createdAt: Date | string;
@@
-  const timeAgo = formatDistanceToNow(notification.createdAt, { addSuffix: false });
-  const displayTime = timeAgo.includes('less than') ? 'now' : timeAgo;
+  const created =
+    typeof notification.createdAt === 'string'
+      ? new Date(notification.createdAt)
+      : notification.createdAt;
+  const timeAgo = formatDistanceToNow(created, { addSuffix: false });
+  const displayTime =
+    Math.abs(Date.now() - created.getTime()) < 60_000 ? 'now' : timeAgo;

Also applies to: 64-66


54-61: Fix navigation: use /projects for projectId, add launchId fallback, and return after push.

Project notifications are mistakenly routed to /launches; there is also no launchId fallback and no early return after navigation.

-    // Navigate based on notification type
-    if (notification.data?.projectId) {
-      if (notification.type === 'comment_received') {
-        router.push(`/launches/${notification.data.projectId}#comments`);
-      } else {
-        router.push(`/launches/${notification.data.projectId}`);
-      }
-    }
+    // Navigate: prefer projectId; fallback to launchId
+    if (notification.data?.projectId) {
+      if (notification.type === 'comment_received') {
+        router.push(`/projects/${notification.data.projectId}#comments`);
+        return;
+      }
+      router.push(`/projects/${notification.data.projectId}`);
+      return;
+    }
+    if (notification.data?.launchId) {
+      if (notification.type === 'comment_received') {
+        router.push(`/launches/${notification.data.launchId}#comments`);
+        return;
+      }
+      router.push(`/launches/${notification.data.launchId}`);
+      return;
+    }
🧹 Nitpick comments (3)
apps/web/components/layout/notifications-dropdown.tsx (3)

37-47: Type the icon helper with the notification type union.

Strengthens exhaustiveness and prevents accidental string values.

-  const getNotificationIcon = (type: string) => {
+  const getNotificationIcon = (type: NotificationItemProps['notification']['type']) => {

68-75: Improve accessibility/keyboard support for clickable row.

Use proper semantics or add keyboard handlers to the div.

-    <div
-      onClick={handleClick}
+    <div
+      role="button"
+      tabIndex={0}
+      onClick={handleClick}
+      onKeyDown={(e) => {
+        if (e.key === 'Enter' || e.key === ' ') handleClick();
+      }}
       className={cn(

104-107: Disable “Mark all as read” while pending to prevent double submissions.

Wire up the mutation’s pending state.

 export function NotificationsDropdown() {
-  const { notifications, unreadCount, markAsRead, markAllAsRead, isLoading, hasError } =
-    useNotifications();
+  const {
+    notifications,
+    unreadCount,
+    markAsRead,
+    markAllAsRead,
+    isLoading,
+    hasError,
+    isMarkingAllAsRead,
+  } = useNotifications();
@@
-              <Button variant="ghost" onClick={markAllAsRead} className="gap-2 rounded-none border">
+              <Button
+                variant="ghost"
+                onClick={markAllAsRead}
+                disabled={isMarkingAllAsRead}
+                className="gap-2 rounded-none border"
+              >

Also applies to: 155-159

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d9a9c57 and a359c9a.

📒 Files selected for processing (4)
  • apps/web/components/comments/reply-form.tsx (1 hunks)
  • apps/web/components/layout/notifications-dropdown.tsx (1 hunks)
  • packages/api/src/routers/profile.ts (4 hunks)
  • packages/db/src/schema/index.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/web/components/comments/reply-form.tsx
  • packages/api/src/routers/profile.ts
  • packages/db/src/schema/index.ts
🧰 Additional context used
🧬 Code graph analysis (1)
apps/web/components/layout/notifications-dropdown.tsx (3)
packages/db/src/schema/notifications.ts (1)
  • notification (11-40)
packages/ui/src/lib/utils.ts (1)
  • cn (4-6)
apps/web/hooks/use-notifications.ts (1)
  • useNotifications (8-101)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Vade Review

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (5)
packages/db/drizzle/meta/0024_snapshot.json (3)

2206-2325: Add FK and a composite PK to contrib_rollups for data integrity and simpler upserts.

Without an FK, orphan rows are possible; without a PK, some ORMs/tools struggle. Consider:

  • FK: contrib_rollups.user_id → public.user(id)
  • PK: PRIMARY KEY (user_id, period)

Also consider CHECKs to prevent negative counts and an index to support ORDER BY total DESC per period.

Example follow-up migration:

+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_user_fk"
+    FOREIGN KEY ("user_id") REFERENCES "public"."user"("id") ON DELETE CASCADE;
+
+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_pk" PRIMARY KEY ("user_id","period");
+
+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_nonneg_chk"
+    CHECK (commits >= 0 AND prs >= 0 AND issues >= 0 AND total >= 0);
+
+CREATE INDEX "contrib_rollups_period_total_desc_idx"
+  ON "contrib_rollups" ("period","total" DESC);

2123-2184: Optimize unread notification lookups.

Add a partial index for common “unread for user” queries.

+CREATE INDEX "notification_unread_by_user_idx"
+  ON "notification" ("user_id")
+  WHERE read = false;

2388-2413: Validate necessity of contrib_provider enum.

It’s defined but unused in this snapshot. If not consumed in this PR, either remove to reduce surface area or leave a short rationale in migration notes.

packages/db/drizzle/0024_polite_ghost_rider.sql (2)

41-44: Speed up unread notification queries.

Add a partial index for (user_id, read=false).

 CREATE INDEX "notification_user_id_idx" ON "notification" USING btree ("user_id");--> statement-breakpoint
 CREATE INDEX "notification_read_idx" ON "notification" USING btree ("read");--> statement-breakpoint
 CREATE INDEX "notification_type_idx" ON "notification" USING btree ("type");--> statement-breakpoint
 CREATE INDEX "notification_created_at_idx" ON "notification" USING btree ("created_at" DESC NULLS LAST);--> statement-breakpoint
+CREATE INDEX "notification_unread_by_user_idx"
+  ON "notification" USING btree ("user_id")
+  WHERE read = false;--> statement-breakpoint

3-3: Confirm the need for contrib_provider type now.

It’s defined but unused in this migration. If not required by this PR, consider deferring its creation to reduce churn.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between a359c9a and ba5e908.

📒 Files selected for processing (11)
  • packages/db/drizzle/0022_green_rattler.sql (0 hunks)
  • packages/db/drizzle/0022_premium_zaladane.sql (0 hunks)
  • packages/db/drizzle/0023_real_wither.sql (0 hunks)
  • packages/db/drizzle/0023_spotty_sharon_carter.sql (0 hunks)
  • packages/db/drizzle/0024_free_khan.sql (0 hunks)
  • packages/db/drizzle/0024_polite_ghost_rider.sql (1 hunks)
  • packages/db/drizzle/0025_add_repo_id_index.sql (0 hunks)
  • packages/db/drizzle/meta/0022_snapshot.json (0 hunks)
  • packages/db/drizzle/meta/0023_snapshot.json (0 hunks)
  • packages/db/drizzle/meta/0024_snapshot.json (3 hunks)
  • packages/db/drizzle/meta/_journal.json (1 hunks)
💤 Files with no reviewable changes (8)
  • packages/db/drizzle/meta/0023_snapshot.json
  • packages/db/drizzle/0025_add_repo_id_index.sql
  • packages/db/drizzle/0022_premium_zaladane.sql
  • packages/db/drizzle/meta/0022_snapshot.json
  • packages/db/drizzle/0024_free_khan.sql
  • packages/db/drizzle/0023_spotty_sharon_carter.sql
  • packages/db/drizzle/0023_real_wither.sql
  • packages/db/drizzle/0022_green_rattler.sql

Comment on lines 23 to 33
CREATE TABLE "contrib_rollups" (
  "user_id" text NOT NULL,
  "period" "contrib_period" NOT NULL,
  "commits" integer DEFAULT 0 NOT NULL,
  "prs" integer DEFAULT 0 NOT NULL,
  "issues" integer DEFAULT 0 NOT NULL,
  "total" integer DEFAULT 0 NOT NULL,
  "fetched_at" timestamp with time zone DEFAULT now() NOT NULL,
  "updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint

🛠️ Refactor suggestion

Harden contrib_rollups: add PK, FK, non-negative checks, and a ranking index.

These constraints reduce bugs and speed common queries (e.g., export/ranking by period).

 CREATE TABLE "contrib_rollups" (
 	"user_id" text NOT NULL,
 	"period" "contrib_period" NOT NULL,
 	"commits" integer DEFAULT 0 NOT NULL,
 	"prs" integer DEFAULT 0 NOT NULL,
 	"issues" integer DEFAULT 0 NOT NULL,
 	"total" integer DEFAULT 0 NOT NULL,
 	"fetched_at" timestamp with time zone DEFAULT now() NOT NULL,
 	"updated_at" timestamp with time zone DEFAULT now() NOT NULL
 );
 --> statement-breakpoint
+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_pk" PRIMARY KEY ("user_id","period");--> statement-breakpoint
+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_user_fk"
+    FOREIGN KEY ("user_id") REFERENCES "public"."user"("id") ON DELETE CASCADE;--> statement-breakpoint
+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_nonneg_chk"
+    CHECK (commits >= 0 AND prs >= 0 AND issues >= 0 AND total >= 0);--> statement-breakpoint
 CREATE UNIQUE INDEX "contrib_rollups_user_period_uidx" ON "contrib_rollups" USING btree ("user_id","period");--> statement-breakpoint
 CREATE INDEX "contrib_rollups_period_idx" ON "contrib_rollups" USING btree ("period");--> statement-breakpoint
 CREATE INDEX "contrib_rollups_user_idx" ON "contrib_rollups" USING btree ("user_id");
+CREATE INDEX "contrib_rollups_period_total_desc_idx"
+  ON "contrib_rollups" USING btree ("period","total" DESC);--> statement-breakpoint

Also applies to: 45-47

"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "project" ADD COLUMN "repo_id" text NOT NULL;--> statement-breakpoint

⚠️ Potential issue

Adding NOT NULL repo_id on a populated table will fail. Stage this change safely.

Add the column as nullable, backfill it, then enforce NOT NULL in a follow-up migration; the current statement will error if project already has rows.

-ALTER TABLE "project" ADD COLUMN "repo_id" text NOT NULL;--> statement-breakpoint
+ALTER TABLE "project" ADD COLUMN "repo_id" text;--> statement-breakpoint
+-- Backfill here if possible, e.g. derive from existing metadata.
+-- UPDATE "project" p SET "repo_id" = <expression> WHERE "repo_id" IS NULL;--> statement-breakpoint
+-- After backfill and verification, enforce NOT NULL in a subsequent migration:
+-- ALTER TABLE "project" ALTER COLUMN "repo_id" SET NOT NULL;--> statement-breakpoint
🤖 Prompt for AI Agents
In packages/db/drizzle/0024_polite_ghost_rider.sql around line 34, the ALTER
TABLE adding "repo_id" as NOT NULL will fail on a populated table; change the
migration to add "repo_id" as nullable (no NOT NULL), run a separate backfill
step to populate repo_id for existing rows (either in this migration after
adding the column or in application code/maintenance job), and then create a
follow-up migration that ALTERs the column to NOT NULL once all rows are
populated; ensure the current file removes the statement-breakpoint marker and
does not enforce NOT NULL until after backfill completes.

Comment on lines 160 to 165
"idx": 24,
"version": "7",
"when": 1756062828591,
"tag": "0022_premium_zaladane",
"breakpoints": true
},
{
"idx": 23,
"version": "7",
"when": 1756063027028,
"tag": "0023_real_wither",
"when": 1756733267727,
"tag": "0024_polite_ghost_rider",
"breakpoints": true
}

💡 Verification agent

🧩 Analysis chain

Do not rewrite migration history: 0022/0023 removed and replaced by 0024.

Removing prior entries can break environments that already applied them and cause drift/conflicts. Keep 0022 and 0023 in the journal and append 0024 as the next entry; if they’re intentionally superseded, ship forward-only migrations that adjust schema rather than rewriting history. Update PR notes to explain the sequence.

If you want, I can generate a script to diff the current DB’s drizzle journal vs repo to assess drift.


🏁 Script executed:

#!/bin/bash
# Searching for entries 0022 and 0023 in the migration journal
rg -n '"tag":\s*"0022_' packages/db/drizzle/meta/_journal.json || echo "0022 not found"
rg -n '"tag":\s*"0023_' packages/db/drizzle/meta/_journal.json || echo "0023 not found"

Length of output: 207


🏁 Script executed:

#!/bin/bash
echo "Journal entries:"
rg -n '"tag":' packages/db/drizzle/meta/_journal.json || true
echo

echo "List of migration SQL files in packages/db/drizzle:"
ls -1 packages/db/drizzle/*.sql || echo "No .sql files found"
echo

echo "Find migration files starting with 0022 or 0023:"
find packages/db/drizzle -type f \( -name '0022*' -o -name '0023*' \) || echo "None found"

Length of output: 3672


Restore missing migrations 0022 and 0023
0022 and 0023 have been removed from packages/db/drizzle/meta/_journal.json and no corresponding .sql files exist, which rewrites history and risks breaking applied environments. Re-add the original 0022 and 0023 journal entries and SQL files, then append 0024 as the next migration. Update the PR notes to explain this sequence.

🤖 Prompt for AI Agents
In packages/db/drizzle/meta/_journal.json around lines 160-165, entries for
migrations 0022 and 0023 were removed and 0024 was added directly, which
rewrites history; restore the original journal objects for 0022 and 0023 (with
their original idx, version, when, tag, breakpoints and any other fields) and
re-create their corresponding .sql migration files under
packages/db/drizzle/migrations with the original filenames and SQL contents,
then update the 0024 entry to be the next sequential migration (idx incremented
accordingly) so it becomes 0024 after restored entries; finally, update the PR
notes to explain that 0022 and 0023 were restored and 0024 appended to preserve
history and applied environments.
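
A minimal bun sketch of a repo-side consistency check that would catch this class of history rewrite — illustrative, not part of the PR, and assuming the standard drizzle journal shape (entries with idx and tag):

#!/usr/bin/env bun
// Sketch: verify every journal entry has a matching migration file, and
// every migration file is in the journal, before deploying.
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

const dir = 'packages/db/drizzle';
const journal = JSON.parse(readFileSync(join(dir, 'meta/_journal.json'), 'utf8')) as {
  entries: { idx: number; tag: string }[];
};

// Collect migration tags from the .sql filenames (e.g. 0022_huge_piledriver).
const sqlTags = new Set(
  readdirSync(dir)
    .filter((f) => f.endsWith('.sql'))
    .map((f) => f.replace(/\.sql$/, '')),
);

for (const e of journal.entries) {
  if (!sqlTags.has(e.tag)) console.error(`journal entry ${e.idx} (${e.tag}) has no .sql file`);
  sqlTags.delete(e.tag);
}
for (const tag of sqlTags) console.error(`migration file ${tag}.sql is not in the journal`);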

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (1)
packages/db/drizzle/meta/0022_snapshot.json (1)

2319-2324: Add FK: contrib_rollups.user_id → public.user.id (cascade on delete).

Missing FK risks orphaned rows and broken joins. Add the reference in the Drizzle schema and regenerate snapshots.

Apply in packages/db/src/schema/contributions.ts:

-  userId: text('user_id').notNull(),
+  userId: text('user_id')
+    .notNull()
+    .references(() => user.id, { onDelete: 'cascade' }),
🧹 Nitpick comments (7)
packages/db/drizzle/meta/0022_snapshot.json (3)

2206-2249: Prefer a real PK: use (user_id, period) as composite primary key.

You currently rely on a unique index; defining a PK improves integrity and planner hints. Remove the redundant unique index afterward.

Apply in packages/db/src/schema/contributions.ts:

 export const contribRollups = pgTable('contrib_rollups', {
   // columns...
 }, (t) => ({
-  userPeriodUidx: uniqueIndex('contrib_rollups_user_period_uidx').on(t.userId, t.period),
+  pk: primaryKey({ columns: [t.userId, t.period] }),
 }));

2251-2264: Ensure updated_at auto-updates on row changes.

Default now() doesn’t update on UPDATE. Add an on-update hook or a DB trigger.

Drizzle suggestion (Postgres):

-  updatedAt: timestamp('updated_at', { withTimezone: true }).defaultNow(),
+  updatedAt: timestamp('updated_at', { withTimezone: true })
+    .defaultNow()
+    .$onUpdate(() => sql`now()`),

If Drizzle version lacks $onUpdate for PG, add a trigger:

CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN NEW.updated_at = now(); RETURN NEW; END; $$ LANGUAGE plpgsql;
CREATE TRIGGER trg_contrib_rollups_updated_at
BEFORE UPDATE ON contrib_rollups FOR EACH ROW EXECUTE FUNCTION set_updated_at();

2223-2250: Add non-negative checks (and consider deriving total).

Prevent bad data: commits/prs/issues/total should be >= 0. If total is intended as commits+prs+issues, enforce it.

Drizzle table checks:

}, (t) => ({
  nonNeg: check('contrib_rollups_non_negative', sql`${t.commits} >= 0 AND ${t.prs} >= 0 AND ${t.issues} >= 0 AND ${t.total} >= 0`),
  // If applicable:
  // totalEq: check('contrib_rollups_total_eq', sql`${t.total} = ${t.commits} + ${t.prs} + ${t.issues}`),
}));
packages/db/drizzle/0022_huge_piledriver.sql (4)

23-32: Enforce non-negative counts and consider generating total.

Prevent bad data; optionally derive total to avoid drift.

Add constraints (safe):

+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_nonneg_chk"
+  CHECK ("commits" >= 0 AND "prs" >= 0 AND "issues" >= 0 AND "total" >= 0);

Or make total generated (only if writes to total are removed):

-  "total" integer DEFAULT 0 NOT NULL,
+  -- drop then re-add as generated
+ALTER TABLE "contrib_rollups" DROP COLUMN "total";
+ALTER TABLE "contrib_rollups"
+  ADD COLUMN "total" integer GENERATED ALWAYS AS ("commits" + "prs" + "issues") STORED;

45-47: Add leaderboard-friendly index on (period, total DESC).

Sorting by total within a period benefits from a composite index.

Add:

+CREATE INDEX "contrib_rollups_period_total_desc_idx"
+ON "contrib_rollups" USING btree ("period", "total" DESC);

11-21: Auto-update updated_at; optimize unread lookups.

updated_at never changes automatically; unread queries are common.

Add trigger and partial index:

+CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
+BEGIN NEW.updated_at = now(); RETURN NEW; END; $$ LANGUAGE plpgsql;
+CREATE TRIGGER notification_set_updated_at
+BEFORE UPDATE ON "notification" FOR EACH ROW EXECUTE FUNCTION set_updated_at();
+
+CREATE INDEX "notification_unread_by_user_idx"
+ON "notification" ("user_id")
+WHERE "read" = false;

Repeat trigger for contrib_rollups if you update rows in place.


3-3: Unused enum contrib_provider.

If not used in this migration set, defer creation to the first consumer to reduce schema noise.

Drop for now (or keep with a comment in snapshot):

-DROP TYPE IF EXISTS "public"."contrib_provider";
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between ba5e908 and 18742b4.

📒 Files selected for processing (3)
  • packages/db/drizzle/0022_huge_piledriver.sql (1 hunks)
  • packages/db/drizzle/meta/0022_snapshot.json (3 hunks)
  • packages/db/drizzle/meta/_journal.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/db/drizzle/meta/_journal.json
🔇 Additional comments (4)
packages/db/drizzle/meta/0022_snapshot.json (2)

2-3: Snapshot header update looks fine.

IDs updated as expected for a new snapshot.


2396-2413: Confirm enum values vs product scope.

contrib_period includes all_time but PR scope/UI mention last_30d/last_365d. If all_time isn’t used, consider removing to avoid dead code/branches (or add it end-to-end).

Do you plan to expose “All time” soon? If not, I can prep a follow-up PR to trim it and update API validation.
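
Until then, a minimal sketch of what that API validation could look like — schema and param names are illustrative, assuming the zod dependency already present in this repo:

import { z } from 'zod';

// Sketch: only accept the periods the leaderboard actually serves;
// widen the enum end-to-end before ever exposing 'all_time'.
const leaderboardQuery = z.object({
  period: z.enum(['last_30d', 'last_365d']).default('last_30d'),
  page: z.coerce.number().int().min(1).default(1),
});

// e.g. leaderboardQuery.parse({ period: 'last_365d', page: '2' })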

packages/db/drizzle/0022_huge_piledriver.sql (2)

2-2: Period enum includes all_time but APIs/Redis use only 30d/365d.

Confirm all_time won’t be accidentally populated or exposed until supported.

Would you like me to scan API handlers for any all_time references and open a follow-up task?


4-10: project_comment_like table looks solid.

PK + uniqueness per (comment_id,user_id) and FKs with cascade are appropriate; indexes match common access paths.

Comment on lines +23 to +32
CREATE TABLE "contrib_rollups" (
  "user_id" text NOT NULL,
  "period" "contrib_period" NOT NULL,
  "commits" integer DEFAULT 0 NOT NULL,
  "prs" integer DEFAULT 0 NOT NULL,
  "issues" integer DEFAULT 0 NOT NULL,
  "total" integer DEFAULT 0 NOT NULL,
  "fetched_at" timestamp with time zone DEFAULT now() NOT NULL,
  "updated_at" timestamp with time zone DEFAULT now() NOT NULL
);

🛠️ Refactor suggestion

No FK from contrib_rollups.user_id → user(id).

Orphans are possible and Redis sync may surface deleted users.

Add FK:

+ALTER TABLE "contrib_rollups"
+  ADD CONSTRAINT "contrib_rollups_user_id_user_id_fk"
+  FOREIGN KEY ("user_id") REFERENCES "public"."user"("id")
+  ON DELETE CASCADE ON UPDATE NO ACTION;
🤖 Prompt for AI Agents
In packages/db/drizzle/0022_huge_piledriver.sql around lines 23–32, the
contrib_rollups.user_id column lacks a foreign key to user(id), allowing
orphaned rollups; remove orphans first (DELETE FROM contrib_rollups WHERE
user_id NOT IN (SELECT id FROM "user");) then add a foreign key constraint
linking contrib_rollups.user_id to "user"(id) with ON DELETE CASCADE so rollups
are removed when a user is deleted (ALTER TABLE ... ADD CONSTRAINT ... FOREIGN
KEY (...) REFERENCES "user"(id) ON DELETE CASCADE).

🛠️ Refactor suggestion

contrib_rollups lacks a primary key; prefer a PK over a UNIQUE index for ORM/upsert efficiency.

Add a composite PK (user_id, period) and keep period/user indexes.

Proposed migration:

-CREATE UNIQUE INDEX "contrib_rollups_user_period_uidx" ON "contrib_rollups" USING btree ("user_id","period");
+ALTER TABLE "contrib_rollups" ADD CONSTRAINT "contrib_rollups_pkey" PRIMARY KEY ("user_id","period");
+DROP INDEX IF EXISTS "contrib_rollups_user_period_uidx";

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/db/drizzle/0022_huge_piledriver.sql around lines 23-32, the
contrib_rollups table lacks a primary key which hurts ORM/upsert efficiency;
modify the CREATE TABLE to declare a composite PRIMARY KEY (user_id, period) and
retain/explicitly create any separate indexes you want on period and user_id if
required for queries; ensure the migration uses PRIMARY KEY (user_id, period)
rather than a UNIQUE constraint so ORM upserts use the PK and adjust any
dependent constraints or code that assumed no PK.
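
To make the upsert benefit concrete, a sketch of the aggregator's daily write once the composite PK exists. It uses drizzle-orm's onConflictDoUpdate; the db instance, import paths, and the contribRollups export are assumed, not taken from the PR:

import { sql } from 'drizzle-orm';
import { db } from '../client'; // hypothetical path to the drizzle instance
import { contribRollups } from '../schema/contributions'; // table export assumed

// Sketch: with PRIMARY KEY (user_id, period), the rollup write becomes a
// single idempotent upsert targeting the PK columns.
await db
  .insert(contribRollups)
  .values({ userId: 'u1', period: 'last_30d', commits: 12, prs: 3, issues: 5, total: 20 })
  .onConflictDoUpdate({
    target: [contribRollups.userId, contribRollups.period],
    set: {
      commits: sql`excluded.commits`,
      prs: sql`excluded.prs`,
      issues: sql`excluded.issues`,
      total: sql`excluded.total`,
      updatedAt: sql`now()`,
    },
  });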

"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "project" ADD COLUMN "repo_id" text NOT NULL;--> statement-breakpoint

⚠️ Potential issue

Adding a NOT NULL column to a populated table will lock or fail the migration.

ALTER TABLE project ADD COLUMN repo_id text NOT NULL will block on large tables and fail if rows already exist. Use a safe 3-step pattern.

Apply in a follow-up migration:

-ALTER TABLE "project" ADD COLUMN "repo_id" text NOT NULL;
+-- 1) Add nullable
+ALTER TABLE "project" ADD COLUMN "repo_id" text;
+-- 2) Backfill (example placeholder; replace with real derivation)
+-- UPDATE "project" p SET "repo_id" = COALESCE(p.repo_id, md5(p.id::text));
+-- 3) Enforce NOT NULL
+ALTER TABLE "project" ALTER COLUMN "repo_id" SET NOT NULL;
🤖 Prompt for AI Agents
In packages/db/drizzle/0022_huge_piledriver.sql around line 34, the migration
adds repo_id as NOT NULL which will lock or fail on populated/large tables;
instead split into a safe 3-step migration: 1) add the column as nullable (ALTER
TABLE ... ADD COLUMN repo_id text), 2) backfill existing rows in a separate
transaction (UPDATE project SET repo_id = <value or computed value> WHERE
repo_id IS NULL) or set a non-blocking default for new rows, and 3) in a
follow-up migration set the column NOT NULL (ALTER TABLE ... ALTER COLUMN
repo_id SET NOT NULL) only after backfill completes; implement these steps
across two migrations (add+backfill, then enforce NOT NULL) and avoid setting
NOT NULL in the initial migration.

Comment on lines +2267 to +2317
"contrib_rollups_user_period_uidx": {
"name": "contrib_rollups_user_period_uidx",
"columns": [
{
"expression": "user_id",
"isExpression": false,
"asc": true,
"nulls": "last"
},
{
"expression": "period",
"isExpression": false,
"asc": true,
"nulls": "last"
}
],
"isUnique": true,
"concurrently": false,
"method": "btree",
"with": {}
},
"contrib_rollups_period_idx": {
"name": "contrib_rollups_period_idx",
"columns": [
{
"expression": "period",
"isExpression": false,
"asc": true,
"nulls": "last"
}
],
"isUnique": false,
"concurrently": false,
"method": "btree",
"with": {}
},
"contrib_rollups_user_idx": {
"name": "contrib_rollups_user_idx",
"columns": [
{
"expression": "user_id",
"isExpression": false,
"asc": true,
"nulls": "last"
}
],
"isUnique": false,
"concurrently": false,
"method": "btree",
"with": {}
}

🛠️ Refactor suggestion

Add leaderboard index: (period ASC, total DESC) for fast top-N.

Typical queries ORDER BY total DESC filtered by period; this avoids wide sorts.

Apply in packages/db/src/schema/contributions.ts:

 import { desc } from 'drizzle-orm';
 // ...
 export const contribRollups = pgTable('contrib_rollups', { /* cols */ }, (t) => ({
+  topByPeriod: index('contrib_rollups_top_by_period_idx').on(t.period, desc(t.total)),
 }));

Optional: partial indexes for hot periods

CREATE INDEX CONCURRENTLY contrib_rollups_top_30d_idx ON contrib_rollups (total DESC) WHERE period = 'last_30d';
CREATE INDEX CONCURRENTLY contrib_rollups_top_365d_idx ON contrib_rollups (total DESC) WHERE period = 'last_365d';

Successfully merging this pull request may close these issues:

Add global contributor leaderboard based on total GitHub/GitLab contributions