fix oauth callback and integrate istio gateway with cert-manager for prod #25
sanjay7178 wants to merge 2 commits into main
Conversation
… generation Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
docs/CONTRIBUTING.md (2)
136-140: Outdated path in Project Structure (still references GUAC). Update `/cmd/guac/` to the current binary path for KubeBrowse to avoid confusing contributors.

```diff
- - `/cmd/guac/` - Main application entry point
+ - `/cmd/kubebrowse/` - Main application entry point
```
153-174: Docs commit types don’t match the enforced regex in lefthook. The hook allows: feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert. The list here omits several types. Align to prevent false failures.

```diff
 Types include:
 - `feat`: A new feature
 - `fix`: A bug fix
 - `docs`: Documentation changes
 - `style`: Code style changes (formatting, etc.)
 - `refactor`: Code changes that neither fix bugs nor add features
 - `test`: Adding or modifying tests
-- `chore`: Changes to the build process or auxiliary tools
+- `perf`: Performance improvements
+- `build`: Build system or external dependencies
+- `ci`: CI configuration and scripts
+- `chore`: Changes to the build process or auxiliary tools
+- `revert`: Reverts a previous commit
```

internal/auth/handlers.go (5)
262-267: Do not log OAuth authorization codes. Authorization codes are sensitive. Remove or fully redact this log.
Apply:
- logrus.Infof("OAuth callback - Code: %s...", codeParam[:10]) + logrus.Debug("OAuth callback - Code received (redacted)")
571-596: Local cookie domain 'localhost' can break; also set SameSite. Many browsers ignore Domain=localhost; use host-only cookies (empty domain) locally and set SameSite=Lax to reduce CSRF risk.
Apply:
```diff
-	if strings.Contains(host, "localhost") || strings.Contains(host, "127.0.0.1") {
-		domain = "localhost"
-		// Check if the request is coming through HTTPS (even in development)
-		secure = c.Request.TLS != nil || c.GetHeader("X-Forwarded-Proto") == "https"
+	if strings.Contains(host, "localhost") || strings.Contains(host, "127.0.0.1") {
+		// Use host-only cookie locally for better browser compatibility
+		domain = ""
+		secure = c.Request.TLS != nil || strings.EqualFold(c.GetHeader("X-Forwarded-Proto"), "https")
 	} else {
 		// For production, extract domain from host
 		if strings.Contains(host, ":") {
 			domain = strings.Split(host, ":")[0]
 		} else {
 			domain = host
 		}
 		secure = true // HTTPS for production
 	}
 	logrus.Debugf("Setting session cookie for host: %s, domain: %s, secure: %v", host, domain, secure)
+	// Prefer Lax to allow OAuth GET redirects, while mitigating CSRF
+	c.SetSameSite(http.SameSiteLaxMode)
 	c.SetCookie(
 		SessionCookieName,
 		sessionToken,
 		7*24*60*60, // 7 days
 		"/",        // Path - root path to ensure it's sent with all requests
-		domain,     // Domain - set to localhost for local development
-		secure,     // Secure - false for local development, true for production
+		domain,     // Domain
+		secure,     // Secure
 		true,       // httpOnly
 	)
```
65-72: Gothic session cookie is configured insecurely (HttpOnly=false, Secure=false). This cookie should never be JS-accessible and must be Secure in production. Also set SameSite.
Apply:
```diff
-	store := sessions.NewCookieStore([]byte(sessionSecret))
+	store := sessions.NewCookieStore([]byte(sessionSecret))
 	store.MaxAge(3600)
 	store.Options.Path = "/"
-	store.Options.HttpOnly = false // Allow JavaScript access for debugging
-	store.Options.Secure = false   // Allow HTTP cookies (important for development)
-	store.Options.Domain = ""      // Use default domain
+	store.Options.HttpOnly = true
+	// In dev you may run over http; in prod always secure.
+	store.Options.Secure = isProduction
+	store.Options.Domain = ""
+	store.Options.SameSite = http.SameSiteLaxMode
```
404-406: Don’t log session tokens. Leaking session tokens in logs is a high-severity issue.
Apply:
- logrus.Infof("Session cookie set for user %s with token: %s", user.Email, session.SessionToken) + logrus.Infof("Session cookie set for user %s", user.Email)
685-695: Cookie clearing may fail if Domain/Secure don’t match the set cookie. Mirror setSessionCookie’s domain/secure logic when clearing, otherwise cookies can persist.
Apply:
```diff
-func (h *Handler) clearSessionCookie(c *gin.Context) {
-	c.SetCookie(
-		SessionCookieName,
-		"",
-		-1,
-		"/",
-		"",
-		false,
-		true,
-	)
-}
+func (h *Handler) clearSessionCookie(c *gin.Context) {
+	host := c.Request.Host
+	var domain string
+	var secure bool
+	if strings.Contains(host, "localhost") || strings.Contains(host, "127.0.0.1") {
+		domain = ""
+		secure = c.Request.TLS != nil || strings.EqualFold(c.GetHeader("X-Forwarded-Proto"), "https")
+	} else {
+		if strings.Contains(host, ":") {
+			domain = strings.Split(host, ":")[0]
+		} else {
+			domain = host
+		}
+		secure = true
+	}
+	c.SetSameSite(http.SameSiteLaxMode)
+	c.SetCookie(SessionCookieName, "", -1, "/", domain, secure, true)
+}
```

deployments/manifest.yml (4)
341-361: Hardcoded secrets in manifest (OAuth, session, SMTP). Move to Kubernetes Secrets immediately. Client secret, session secret, and SMTP password are committed in plain text.
Apply:
```diff
-            - name: GITHUB_CLIENT_ID
-              value: "Ov23liQQQ5lMyIrOULYx"
-            - name: GITHUB_CLIENT_SECRET
-              value: "47afea55e0e19231e1cc82c3a57d56aaeedb480e"
+            - name: GITHUB_CLIENT_ID
+              valueFrom: { secretKeyRef: { name: oauth-secrets, key: github_client_id } }
+            - name: GITHUB_CLIENT_SECRET
+              valueFrom: { secretKeyRef: { name: oauth-secrets, key: github_client_secret } }
 ...
-            - name: SESSION_SECRET
-              value: "McX0xoYlJYpO0EGdUZC6aJxVh2rKYrXZM7SPMpVREeXMXoMBI20v90PICD-LfEn7jr07c5vH2CWcKzjl8hoNvQ"
+            - name: SESSION_SECRET
+              valueFrom: { secretKeyRef: { name: oauth-secrets, key: session_secret } }
 ...
-            - name: SMTP_USERNAME
-              value: "sanjay.obsidian@gmail.com"
-            - name: SMTP_PASSWORD
-              value: "ulagdeliigobhdgr"
+            - name: SMTP_USERNAME
+              valueFrom: { secretKeyRef: { name: smtp-secrets, key: username } }
+            - name: SMTP_PASSWORD
+              valueFrom: { secretKeyRef: { name: smtp-secrets, key: password } }
```

Add secrets (apply separately):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oauth-secrets
  namespace: browser-sandbox
type: Opaque
stringData:
  github_client_id: "<redacted>"
  github_client_secret: "<redacted>"
  session_secret: "<redacted>"
---
apiVersion: v1
kind: Secret
metadata:
  name: smtp-secrets
  namespace: browser-sandbox
type: Opaque
stringData:
  username: "<redacted>"
  password: "<redacted>"
```
381-386: API Service should be ClusterIP behind Istio Gateway. Exposing NodePort is unnecessary and widens attack surface when using Istio.
Apply:
```diff
-spec:
-  type: NodePort
+spec:
+  type: ClusterIP
   ports:
     - port: 4567
       targetPort: 4567
-      nodePort: 30006
```
514-520: Frontend points to a cluster-internal `.svc.cluster.local` URL; browsers can’t resolve that. These URLs will be bundled into client JS and fail in users’ browsers. Point to the public host (Gateway/Ingress) or use a relative path via the same origin.
Apply one of:
```diff
-            - name: VITE_GUAC_CLIENT_URL
-              value: "https://browser-sandbox-api.browser-sandbox.svc.cluster.local:4567"
+            - name: VITE_GUAC_CLIENT_URL
+              value: "https://api.<your-domain>"
             - name: GUAC_CLIENT_URL
-              value: "https://browser-sandbox-api.browser-sandbox.svc.cluster.local:4567"
+              value: "https://api.<your-domain>"
             - name: CADDY_GUAC_CLIENT_URL
-              value: "https://browser-sandbox-api.browser-sandbox.svc.cluster.local:4567"
+              value: "https://api.<your-domain>"
```

Or, if the frontend and API share a host, set these to relative paths (e.g., `/api`), as sketched below.
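A minimal sketch of the relative-path option, assuming the frontend (or its proxy) serves `/api` from the same public origin as the app; the values here are illustrative, not part of this PR:

```yaml
# Illustrative only: same-origin setup where the frontend proxies /api to the backend
- name: VITE_GUAC_CLIENT_URL
  value: "/api"
- name: GUAC_CLIENT_URL
  value: "/api"
- name: CADDY_GUAC_CLIENT_URL
  value: "/api"
```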
545-551: Fix headless Service selector to match pod labels. Selector `app: browser-sandbox-test` (deployments/manifest.yml:547) matches no pods; existing Deployments use `app: browser-sandbox-api`. Change the selector or update pod labels accordingly, for example as sketched below.
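A hedged sketch of the selector fix, assuming the headless Service should target the existing API pods (the Service name and port shown here are illustrative; keep the real ones from the manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: browser-sandbox-headless   # illustrative; keep the existing Service name
  namespace: browser-sandbox
spec:
  clusterIP: None
  selector:
    app: browser-sandbox-api       # was: browser-sandbox-test, which matches no pods
  ports:
    - port: 4567
      targetPort: 4567
```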
🧹 Nitpick comments (20)
docs/CONTRIBUTING.md (3)
93-111: Safer cache cleanup and avoid hardcoding Go patch versions.
- Prefer `go env` locations over deleting legacy paths.
- Avoid pinning a specific patch (e.g., 1.24.3) in docs; instruct users to install the version required by go.mod or latest stable for their OS/arch.
````diff
-4. **Reset the build system**
-
-```bash
-rm -rf $GOPATH/pkg/mod/cache/build
-```
+4. **Reset the build system**
+
+```bash
+# Clean build and module caches via the Go tool
+go clean -cache -modcache
+# (Optional) Manually clear OS cache dir if needed
+rm -rf "$(go env GOCACHE)" 2>/dev/null || true
+```
-5. **Install or update to a consistent Go version**
-Using Go's official installation:
-```bash
-# Download the latest version
-wget https://go.dev/dl/go1.24.3.linux-amd64.tar.gz
-# Remove old installation (if needed)
-sudo rm -rf /usr/local/go
-# Install the new version
-sudo tar -C /usr/local -xzf go1.24.3.linux-amd64.tar.gz
-```
+5. **Install or update to a consistent Go version**
+Using Go's official installation:
+```bash
+# Download the required version for your OS/arch from https://go.dev/dl/
+# Example:
+VERSION=1.24.x  # match go.mod or latest stable
+OS=linux; ARCH=amd64
+wget "https://go.dev/dl/go${VERSION}.${OS}-${ARCH}.tar.gz"
+sudo rm -rf /usr/local/go
+sudo tar -C /usr/local -xzf "go${VERSION}.${OS}-${ARCH}.tar.gz"
+```
````
121-126: Don’t set GOROOT unless you really need to. Setting GOROOT can conflict with version managers. PATH is enough for standard installs.
```diff
-# Add to your shell profile (.bashrc, .zshrc, etc.)
-export GOROOT=/usr/local/go
-export PATH=$GOROOT/bin:$PATH
+# Add to your shell profile (.bashrc, .zshrc, etc.)
+export PATH=/usr/local/go/bin:$PATH
+# Tip: Avoid setting GOROOT; the toolchain sets it automatically.
```
239-245: Minor: clarify “lint” vs “lint-all” scope. Consider explicitly stating that “lint” checks staged files only and “lint-all” runs across the repo, to match the commands.
```diff
-# Check only staged files
+# Check only staged files (pre-commit scope)
 make lint
-# Check all files in the repository
+# Check all files in the repository
 make lint-all
```

lefthook.yml (1)
56-69: Harden DCO check and remove extra blank line.
- Grep the file directly and validate the trailer shape; be lenient on case and leading spaces.
- Remove the trailing blank line flagged by yamllint.
```diff
-  dco-check:
-    run: |
-      # if no arg was passed, fallback to the usual commit-msg file
-      file="${1:-$(git rev-parse --git-dir)/COMMIT_EDITMSG}"
-      message="$(cat "$file")"
-
-      if ! echo "$message" | grep -q "Signed-off-by:"; then
+  dco-check:
+    run: |
+      # if no arg was passed, fallback to the usual commit-msg file
+      file="${1:-$(git rev-parse --git-dir)/COMMIT_EDITMSG}"
+      # Require a Signed-off-by trailer (case-insensitive), allow leading spaces
+      if ! grep -qiE '^\s*Signed-off-by:\s+.+<[^>]+>\s*$' "$file"; then
         echo "DCO check failed!"
         echo "Commit message must include a 'Signed-off-by' line."
         echo "To fix this, commit with: git commit -s"
         echo "Or amend your commit: git commit --amend -s"
         exit 1
       fi
-
```

deployments/isito.yml (1)

1-1: Typo: file name is “isito.yml”. Consider renaming to “istio.yml” to avoid confusion and broken references in docs/scripts.

Dockerfile (1)

47-49: Tighten perms on /app/certs (secrets) and keep Go toolchain out of prod image.
- Set restrictive perms on cert dir.
- Optional: build a dev-only image with Go/bash; ship a minimal prod image to reduce surface area.

Apply this diff:

```diff
-# Create certificates directory (certificates will be generated by init container)
-RUN mkdir -p /app/certs
+# Create certificates directory (init container writes certs)
+RUN mkdir -p /app/certs && chmod 0750 /app/certs
```

Also, remove the trailing space in `GUACD_ADDRESS` (line 52) to avoid subtle env parsing issues:
```dockerfile
ENV GUACD_ADDRESS=guacd:4822
```

internal/auth/service.go (1)
171-191: Minor: downgrade PII logs to Debug.
`Infof` with email/IDs can be noisy/PII. Prefer Debug for normal flows; keep Warn/Error for exceptional cases.
223-246: Hardcoded provider “github” despite dynamic route param. Either validate that only “github” is supported and reject others, or use the param consistently to fetch the provider.
frontend/src/context/AuthContext.jsx (4)
90-103: Route check: handle nested success paths. Use startsWith to cover variations like “/auth/success/…”.
Apply:
```diff
-      if (window.location.pathname !== '/auth/success') {
+      if (!window.location.pathname.startsWith('/auth/success')) {
```
106-113: Allow bypassing rate limit when needed (e.g., post-profile update). Add a force flag to skip throttling.
Apply:
```diff
-  const checkAuthStatus = async () => {
+  const checkAuthStatus = async (opts = {}) => {
+    const { force = false } = opts;
     // Rate limiting to prevent excessive API calls
     const now = Date.now();
-    if (now - lastAuthCheck.current < RATE_LIMIT_MS) {
+    if (!force && now - lastAuthCheck.current < RATE_LIMIT_MS) {
       console.log('Auth check rate limited, skipping');
       return;
     }
```

Then call `checkAuthStatus({ force: true })` after known state changes (e.g., in ProfileRoute).
130-141: Gate PII-heavy logging to development. Avoid logging detailed user fields in production.
Apply:
```diff
-      console.log('User object details:', {
+      if (process.env.NODE_ENV === 'development') console.log('User object details:', {
         id: data.user?.id,
         email: data.user?.email,
         username: data.user?.username,
         name: data.user?.name,
         provider: data.user?.provider,
         hasId: !!data.user?.id,
         hasEmail: !!data.user?.email,
         hasUsername: !!data.user?.username,
         hasName: !!(data.user?.name && data.user.name.trim() !== ''),
         isEmpty: Object.keys(data.user || {}).length === 0
       });
```
181-186: Optional: keep error in catch for diagnostics. Dropping the error can hinder debugging; consider `catch (err) { … }`.

Also applies to: 212-216, 264-268, 294-298
frontend/src/routes/AuthSuccessRoute.jsx (3)
36-57: Single-source the success redirect to avoid duplicate timers/toasts. Both this effect and the polling effect schedule redirects; centralize to one place to prevent race-y timers and repeated navigation on rapid auth flips.
I can propose a small refactor to route all redirects through the polling effect and keep this effect toast-only if you prefer.
156-164: Reduce toast noise during retries. Emitting a new toast on every attempt can spam users. Use a single loading toast and update/dismiss it.
Example:
```diff
-        toast(`Authentication attempt ${attempt}/${maxAttempts}...`, { ... });
+        if (!window.__authToastId) {
+          window.__authToastId = toast.loading('Authenticating...');
+        } else {
+          toast(`Attempt ${attempt}/${maxAttempts}...`, { id: window.__authToastId });
+        }
```

And on success/final failure:
```js
toast.dismiss(window.__authToastId);
window.__authToastId = null;
```
63-64: Nit: rename `pollInterval` to `pollTimer`. It holds a timeout ID, not an interval. Improves clarity.
deployments/manifest.yml (5)
369-369: Harden container securityContext. An empty securityContext misses common hardening.
Apply:
```diff
-          securityContext: {}
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            seccompProfile: { type: RuntimeDefault }
+            capabilities: { drop: ["ALL"] }
```
335-338: Jaeger endpoint is a public IP over HTTP. Prefer a stable DNS name and mTLS within the mesh (or an agent/OTel collector). External IP + HTTP risks data exposure.
Example:
```diff
-            - name: JAEGER_ENDPOINT
-              value: "http://152.53.244.80:14268/api/traces"
+            - name: JAEGER_ENDPOINT
+              value: "http://jaeger-collector.observability.svc.cluster.local:14268/api/traces"
```
291-292: Avoid embedding DB credentials in `DATABASE_URL`. This duplicates secrets in plain text. Construct the URL from secret-backed env vars in the app instead.
If you must keep it here, source it from a Secret:
```diff
-            - name: DATABASE_URL
-              value: "postgres://postgresuser:postgrespassword@postgres.browser-sandbox.svc.cluster.local:5432/sandbox_db?sslmode=disable"
+            - name: DATABASE_URL
+              valueFrom: { secretKeyRef: { name: postgres-secret, key: url } }
```

And add `url` to `postgres-secret`.
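A hedged sketch of that Secret addition, assuming the existing `postgres-secret` can simply gain a `url` entry mirroring the connection string above (placeholders instead of real credentials):

```yaml
# Illustrative only: merge the url key into the real postgres-secret
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: browser-sandbox
type: Opaque
stringData:
  url: "postgres://<user>:<password>@postgres.browser-sandbox.svc.cluster.local:5432/sandbox_db?sslmode=disable"
```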
136-136: Pin image tag for guacd.
`latest` is non-reproducible; pin to a known version.

```diff
-        image: guacamole/guacd:latest
+        image: guacamole/guacd:1.5.5
```
179-179: Pin image tag for MinIO. Avoid `latest` to ensure deterministic deploys.

```diff
-          image: minio/minio:latest
+          image: minio/minio:RELEASE.2025-07-08T00-00-00Z
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- Dockerfile (1 hunks)
- Tiltfile (1 hunks)
- deployments/isito.yml (1 hunks)
- deployments/manifest.yml (3 hunks)
- docs/CONTRIBUTING.md (7 hunks)
- frontend/Caddyfile.dev (1 hunks)
- frontend/src/components/auth/LoginForm.jsx (2 hunks)
- frontend/src/context/AuthContext.jsx (7 hunks)
- frontend/src/routes/AuthSuccessRoute.jsx (2 hunks)
- internal/auth/handlers.go (3 hunks)
- internal/auth/service.go (2 hunks)
- lefthook.yml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
internal/auth/service.go (1)
db/sqlc/user.sql.go (5)
GetUserByProviderParams (348-351), UpdateUserParams (491-495)
frontend/Caddyfile.dev (1)
frontend/vite.config.js (1)
proxyRes(63-75)
frontend/src/components/auth/LoginForm.jsx (3)
frontend/src/components/auth/RegisterForm.jsx (1)
RegisterForm (10-196)

frontend/src/routes/EmailVerificationSuccessRoute.jsx (2)

EmailVerificationSuccessRoute (7-75), toast (11-39)

frontend/src/routes/PasswordChangeRoute.jsx (2)

PasswordChangeRoute (11-234), e (49-88)
frontend/src/routes/AuthSuccessRoute.jsx (2)
frontend/src/context/AuthContext.jsx (1)
checkAuthStatus (105-151)

frontend/src/routes/EmailVerificationSuccessRoute.jsx (1)
EmailVerificationSuccessRoute(7-75)
frontend/src/context/AuthContext.jsx (2)
frontend/src/routes/PasswordChangeRoute.jsx (2)
e (49-88), PasswordChangeRoute (11-234)

frontend/src/routes/ProfileRoute.jsx (1)
ProfileRoute(11-232)
🪛 LanguageTool
docs/CONTRIBUTING.md
[grammar] ~19-~19: There might be a mistake here.
Context: ...mon-issues) - Go Version Mismatch - Project Structure ...
(QB_NEW_EN)
[grammar] ~20-~20: There might be a mistake here.
Context: ...rsion-mismatch) - Project Structure - [Development Workflow](#development-workf...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...ect-structure) - Development Workflow - Branching Strategy...
(QB_NEW_EN)
[grammar] ~22-~22: There might be a mistake here.
Context: ...ment-workflow) - Branching Strategy - Commit Guidelines ...
(QB_NEW_EN)
[grammar] ~23-~23: There might be a mistake here.
Context: ...ching-strategy) - Commit Guidelines - [Conventional Commits Format](#convention...
(QB_NEW_EN)
[grammar] ~24-~24: There might be a mistake here.
Context: ...es) - Conventional Commits Format - [DCO Sign-off Requirement](#dco-sign-off-...
(QB_NEW_EN)
[grammar] ~25-~25: There might be a mistake here.
Context: ...ormat) - DCO Sign-off Requirement - Example Commit - [...
(QB_NEW_EN)
[grammar] ~26-~26: There might be a mistake here.
Context: ...off-requirement) - Example Commit - Pre-commit Hooks -...
(QB_NEW_EN)
🪛 YAMLlint (1.37.1)
lefthook.yml
[warning] 69-69: too many blank lines (1 > 0)
(empty-lines)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build and Test
- GitHub Check: Security Scan
- GitHub Check: Cloudflare Pages: kubebrowse-chore-improve-back
🔇 Additional comments (13)
docs/CONTRIBUTING.md (4)
175-206: DCO instructions: clear and actionable. Good coverage: automatic sign-off, amending commits, and a sample trailer.
251-258: Branch/PR flow may be inconsistent with current practice. This PR targets main, while the doc says PRs should target dev. Confirm the intended flow and update the docs accordingly.
I can PR a quick fix once you confirm the target branch policy.
9-9: Great intro tone and project rename cleanup. The KubeBrowse branding and welcoming copy read well.
299-299: Nice closing line. Consistent with the project rename.
lefthook.yml (2)
50-54: Docs/regex alignment. Your regex allows: feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert. Ensure the docs list all of these (they currently omit several).
56-56: Solid addition: enforcing DCO at commit time. This will save PR review cycles.
Tiltfile (1)
33-38: No action required: `browser-sandbox-frontend` is already loaded via `k8s_yaml` on deployments/manifest.yml.
49-57: Apply authority matcher replacement; service exists on port 80frontend/Caddyfile.dev (1)
11-23: Deduplicate auth proxies and verify dev cookie attributes
Collapse `/auth/login`, `/auth/me`, and `/auth/logout` into a single `handle { path /auth/login /auth/me /auth/logout }` block. In your dev environment, run `curl -i http://localhost/auth/login` and confirm the `Set-Cookie` header has no `Domain` (or is `localhost`) and omits `Secure` over HTTP. Keep `tls_insecure_skip_verify` strictly for development.

internal/auth/handlers.go (2)
83-85: Good call adding GitHub email scope. This enables fetching primary and secondary emails where available.
326-339: Re-BeginAuth on callback without persisted session. This works for OAuth2 GitHub flows, but confirm it’s supported and won’t break if you add providers requiring persisted session state (PKCE/verifier).
frontend/src/components/auth/LoginForm.jsx (1)
36-49: Nice UX: success toast on login. Clean, minimal feedback; the flow remains consistent with context updates.
deployments/manifest.yml (1)
331-333: Double-check session timeout vs TTL semantics.
`POD_SESSION_TIMEOUT=10` minutes while `POD_SESSION_TTL=6` minutes may terminate pods before sessions end. Please confirm the app’s behavior; if TTL is “grace after idle,” ensure `TTL >= TIMEOUT`.
```yaml
      - headers:
          ":authority":
            exact: "api.kubebrowse1.ghcat.tech"
        route:
```
Host/authority mismatch will prevent routing.
You match on authority api.kubebrowse1.ghcat.tech, but:
- Gateway hosts only include `kubebrowse1.ghcat.tech`.
- VirtualService hosts only include `kubebrowse1.ghcat.tech`.
Requests to api. will not match the Gateway/VS. Add api.kubebrowse1.ghcat.tech (and app. below) to both Gateway servers.hosts and VirtualService.spec.hosts, and ensure the cert covers all SANs.
Apply this diff (prefer the dedicated field over headers):
```diff
-    - headers:
-        ":authority":
-          exact: "api.kubebrowse1.ghcat.tech"
+    - authority:
+        exact: "api.kubebrowse1.ghcat.tech"
```

Additionally (outside the changed hunk), update hosts:
```yaml
# Gateway.servers[*].hosts (both 80 and 443)
hosts:
  - kubebrowse1.ghcat.tech
  - api.kubebrowse1.ghcat.tech
  - app.kubebrowse1.ghcat.tech
# VirtualService.spec.hosts
hosts:
  - kubebrowse1.ghcat.tech
  - api.kubebrowse1.ghcat.tech
  - app.kubebrowse1.ghcat.tech
```

🤖 Prompt for AI Agents
In deployments/isito.yml around lines 40 to 43, the route match uses a header
":authority" exact "api.kubebrowse1.ghcat.tech" but the Gateway.servers.hosts
and VirtualService.spec.hosts do not include the api.* and app.* hostnames,
causing request mismatches; replace the header-based authority match with the
dedicated host field (use route.hosts / VirtualService.spec.hosts) and add
api.kubebrowse1.ghcat.tech and app.kubebrowse1.ghcat.tech to both
Gateway.servers[*].hosts (for both 80 and 443) and to VirtualService.spec.hosts,
and ensure the TLS certificate includes all SANs (kubebrowse1.ghcat.tech,
api.kubebrowse1.ghcat.tech, app.kubebrowse1.ghcat.tech).
```yaml
      initContainers:
      - name: cert-generator
        image: alpine:3.20
        command: ['sh', '-c']
        args:
        - |
          apk add --no-cache openssl
          mkdir -p /app/certs
          echo "Generating self-signed certificates..."
          openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -keyout /app/certs/private.key \
            -out /app/certs/certificate.crt \
            -subj "/C=US/ST=California/L=San Francisco/O=My Company/CN=mydomain.com"
          echo "Certificates generated successfully"
          ls -la /app/certs/
        volumeMounts:
        - name: certs-volume
          mountPath: /app/certs
      containers:
```
Don’t generate self-signed TLS in initContainer for prod; use cert-manager/Issuers or terminate at Istio.
This contradicts the PR goal and creates ephemeral, untrusted certs on each pod restart.
If TLS terminates at the Istio Gateway, remove pod TLS entirely. Otherwise, mount a cert-manager-managed Secret.
Apply (remove initContainer and cert volume):
```diff
       serviceAccountName: browser-sandbox-sa
-      initContainers:
-      - name: cert-generator
-        image: alpine:3.20
-        command: ['sh', '-c']
-        args:
-        - |
-          apk add --no-cache openssl
-          mkdir -p /app/certs
-          echo "Generating self-signed certificates..."
-          openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-            -keyout /app/certs/private.key \
-            -out /app/certs/certificate.crt \
-            -subj "/C=US/ST=California/L=San Francisco/O=My Company/CN=mydomain.com"
-          echo "Certificates generated successfully"
-          ls -la /app/certs/
-        volumeMounts:
-        - name: certs-volume
-          mountPath: /app/certs
```

And also remove the related env, volumeMount, and volume below (see next comments).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      serviceAccountName: browser-sandbox-sa
      containers:
```
🤖 Prompt for AI Agents
In deployments/manifest.yml around lines 256-274, the initContainer generates
ephemeral self-signed TLS certs which is inappropriate for production and
contradicts the PR goal; remove the entire initContainer block and the
certs-volume definition/mounts, then remove any related env vars, container
volumeMount entries, and volumes that reference certs-volume elsewhere in the
file so the pod no longer manages TLS; if TLS is terminated by Istio ensure no
pod TLS config remains, otherwise replace these with a reference to a
cert-manager-managed Secret (mounted via a Secret volume) or document that TLS
is terminated at the Gateway.
| value: "https://localhost:4567/auth/oauth/github/callback" | ||
| - name: SESSION_SECRET | ||
| value: "McX0xoYlJYpO0EGdUZC6aJxVh2rKYrXZM7SPMpVREeXMXoMBI20v90PICD-LfEn7jr07c5vH2CWcKzjl8hoNvQ" | ||
| - name: FRONTEND_URL | ||
| value: "http://localhost:5173" | ||
| value: "http://localhost:8080" | ||
| - name: ENVIRONMENT | ||
| value: "development" |
🛠️ Refactor suggestion
Fix OAuth callback and environment for prod.
GITHUB_CALLBACK_URL=https://localhost:4567/... and ENVIRONMENT=development will not work in prod. Callback must be your public domain and environment should be production.
Apply:
```diff
-            - name: GITHUB_CALLBACK_URL
-              value: "https://localhost:4567/auth/oauth/github/callback"
+            - name: GITHUB_CALLBACK_URL
+              value: "https://<your-domain>/auth/oauth/github/callback"
 ...
-            - name: ENVIRONMENT
-              value: "development"
+            - name: ENVIRONMENT
+              value: "production"
```

Also ensure the GitHub OAuth app is updated with the same callback URL.
📝 Committable suggestion
| value: "https://localhost:4567/auth/oauth/github/callback" | |
| - name: SESSION_SECRET | |
| value: "McX0xoYlJYpO0EGdUZC6aJxVh2rKYrXZM7SPMpVREeXMXoMBI20v90PICD-LfEn7jr07c5vH2CWcKzjl8hoNvQ" | |
| - name: FRONTEND_URL | |
| value: "http://localhost:5173" | |
| value: "http://localhost:8080" | |
| - name: ENVIRONMENT | |
| value: "development" | |
| env: | |
| - name: GITHUB_CALLBACK_URL | |
| value: "https://<your-domain>/auth/oauth/github/callback" | |
| - name: SESSION_SECRET | |
| value: "McX0xoYlJYpO0EGdUZC6aJxVh2rKYrXZM7SPMpVREeXMXoMBI20v90PICD-LfEn7jr07c5vH2CWcKzjl8hoNvQ" | |
| - name: FRONTEND_URL | |
| value: "http://localhost:8080" | |
| - name: ENVIRONMENT | |
| value: "production" |
🤖 Prompt for AI Agents
In deployments/manifest.yml around lines 345 to 351 the OAuth callback and
environment are set for local development; change the GITHUB_CALLBACK_URL value
to your public production callback URL
(https://your-production-domain/auth/oauth/github/callback) and set ENVIRONMENT
to "production", removing any hard-coded local URLs and development secrets;
after updating the manifest, ensure the GitHub OAuth app's callback/redirect URL
is updated to exactly match the production callback URL and rotate any secrets
if they were development-only.
```yaml
            - name: CERT_PATH
              value: "/app/certs/certificate.crt"
            - name: CERT_KEY_PATH
              value: "/app/certs/private.key"
          volumeMounts:
            - name: certs-volume
              mountPath: /app/certs
          securityContext: {}
      volumes:
        - name: certs-volume
          emptyDir: {}
---
```
🛠️ Refactor suggestion
Remove CERT_PATH/CERT_KEY and the cert volume if TLS terminates at the gateway. Or replace with cert-manager Secret.
Mounting ephemeral certs is brittle.
Option A (Gateway TLS termination, recommended):
```diff
-            - name: CERT_PATH
-              value: "/app/certs/certificate.crt"
-            - name: CERT_KEY_PATH
-              value: "/app/certs/private.key"
-          volumeMounts:
-            - name: certs-volume
-              mountPath: /app/certs
 ...
-      volumes:
-        - name: certs-volume
-          emptyDir: {}
```

Option B (pod TLS via cert-manager):
- Create a Certificate with `secretName: api-tls`.
- Mount the Secret:
```diff
+          volumeMounts:
+            - name: api-tls
+              mountPath: /app/certs
+              readOnly: true
 ...
+      volumes:
+        - name: api-tls
+          secret:
+            secretName: api-tls
```

And keep `CERT_PATH=/app/certs/tls.crt`, `CERT_KEY_PATH=/app/certs/tls.key`.
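If Option B is chosen, a minimal Certificate sketch for the pod-level secret; the issuer and the in-cluster DNS name are assumptions:

```yaml
# Illustrative only: issuerRef and dnsNames are assumptions for an in-cluster certificate
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
  namespace: browser-sandbox
spec:
  secretName: api-tls                  # produces tls.crt / tls.key, matching the paths above
  issuerRef:
    name: internal-ca-issuer           # assumed internal CA or self-signed ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
    - browser-sandbox-api.browser-sandbox.svc.cluster.local
```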
📝 Committable suggestion
```yaml
          # (removed CERT_PATH/CERT_KEY and certs-volume since TLS now terminates at the gateway)
          securityContext: {}
---
```
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 240-373: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 240-373: Minimize the admission of root containers
(CKV_K8S_23)
```diff
     case AUTH_ACTIONS.SET_USER: {
       // Check if user object has meaningful data
       const hasUserData = action.payload && (
         action.payload.id ||
         action.payload.email ||
         action.payload.username ||
         (action.payload.name && action.payload.name.trim() !== '')
       );

       return {
         ...state,
         user: action.payload,
-        isAuthenticated: !!action.payload,
+        isAuthenticated: hasUserData,
         isLoading: false,
         error: null
       };
     }
```
🛠️ Refactor suggestion
Tighten isAuthenticated predicate.
Rely on stable identifiers (id/email). A non-empty name shouldn’t imply auth.
Apply:
```diff
-    case AUTH_ACTIONS.SET_USER: {
-      // Check if user object has meaningful data
-      const hasUserData = action.payload && (
-        action.payload.id ||
-        action.payload.email ||
-        action.payload.username ||
-        (action.payload.name && action.payload.name.trim() !== '')
-      );
+    case AUTH_ACTIONS.SET_USER: {
+      const hasUserData = Boolean(
+        action.payload && (action.payload.id || action.payload.email)
+      );
```

📝 Committable suggestion
```jsx
    case AUTH_ACTIONS.SET_USER: {
      const hasUserData = Boolean(
        action.payload && (action.payload.id || action.payload.email)
      );

      return {
        ...state,
        user: action.payload,
        isAuthenticated: hasUserData,
        isLoading: false,
        error: null
      };
    }
```
🤖 Prompt for AI Agents
In frontend/src/context/AuthContext.jsx around lines 31 to 47, the current
isAuthenticated check treats a non-empty name as proof of authentication;
tighten this to rely only on stable identifiers by changing the hasUserData
predicate to check for payload.id or a non-empty payload.email (e.g.,
Boolean(action.payload?.id) || Boolean(action.payload?.email &&
action.payload.email.trim() !== '')). Keep assigning state.user = action.payload
but set isAuthenticated based solely on that new predicate, and leave isLoading
and error updates as-is.
```go
	// If email is empty, use a fallback email based on the GitHub username
	if user.Email == "" {
		logrus.Warnf("User email is empty, using fallback email for user: %s", user.UserID)
		// Create a fallback email using the GitHub username
		if user.NickName != "" {
			user.Email = user.NickName + "@github.local"
		} else if user.UserID != "" {
			user.Email = user.UserID + "@github.local"
		} else {
			user.Email = "user@github.local"
		}
		logrus.Infof("Using fallback email: %s", user.Email)
	}
```
💡 Verification agent
🧩 Analysis chain
Fallback email: avoid .local; mark as synthetic and ensure uniqueness.
.local is mDNS-reserved and may confuse systems. Prefer a reserved TLD and an explicit marker to avoid treating it as a deliverable/verified email. Also consider ensuring DB uniqueness isn't based solely on email.
Apply:
```diff
-	if user.NickName != "" {
-		user.Email = user.NickName + "@github.local"
-	} else if user.UserID != "" {
-		user.Email = user.UserID + "@github.local"
-	} else {
-		user.Email = "user@github.local"
-	}
+	// Use reserved TLD and explicit synthetic marker
+	if user.NickName != "" {
+		user.Email = user.NickName + "+synthetic@github.invalid"
+	} else if user.UserID != "" {
+		user.Email = "github-" + user.UserID + "+synthetic@users.invalid"
+	} else {
+		user.Email = "synthetic-user@auth.invalid"
+	}
+	// Optionally: surface a flag via Service so downstream never sends mail to synthetic addresses
```

Please confirm CreateOrUpdateOAuthUser does not rely on email uniqueness alone and that synthetic emails are never used for outbound email.
🏁 Script executed:
```bash
#!/bin/bash
# Locate CreateOrUpdateOAuthUser definition
rg -nP --type=go 'func CreateOrUpdateOAuthUser' -n .
# Show 50 lines around the definition to inspect lookup logic
rg -nP --type=go 'func CreateOrUpdateOAuthUser.*{' -C50 .
# Search within that function for usage of “email” in lookups or updates
rg -nP --type=go -C10 'email' -n $(rg -l --type=go 'func CreateOrUpdateOAuthUser' -n .)
# Check for any unique index on email in DB or model tags
rg -nP 'unique.*email' -n .
```

Length of output: 73420
Update synthetic fallback email format
in internal/auth/handlers.go (≈lines 360–373) and internal/auth/service.go (≈lines 153–162):
- Use a reserved TLD (e.g. `.invalid`) instead of `.local`.
- Append an explicit marker (`+synthetic`).
- Incorporate the OAuth provider into the local-part or domain to avoid collisions (e.g. `github-<id>+synthetic@users.invalid`).

The `users.email` column has a unique constraint at the DB schema level, and `CreateOrUpdateOAuthUser` checks by provider first then by email, so synthetic addresses will never be used for outbound emails.
🤖 Prompt for AI Agents
internal/auth/handlers.go around lines 360–373 and internal/auth/service.go
around lines 153–162: the current synthetic fallback email uses the ".local" TLD
and a simple username which can collide with real addresses; change the format
to use a reserved TLD (".invalid"), incorporate the OAuth provider and a unique
identifier into the local-part, and append a "+synthetic" marker (for example:
"github-<userid>+synthetic@users.invalid" or
"github-<nickname>+synthetic@users.invalid" depending on available fields);
update the assignment logic to prefer nickName then userID for uniqueness,
ensure logging reflects the new address, and make sure this new synthetic
pattern aligns with CreateOrUpdateOAuthUser’s provider-first lookup so it won’t
be used for outbound emails.
```go
	// If email is empty, create a fallback email
	if email == "" {
		if username != "" {
			email = username + "@github.local"
		} else if providerID != "" {
			email = providerID + "@github.local"
		} else {
			email = "user@github.local"
		}
		logrus.Infof("Using fallback email for OAuth user: %s", email)
	}
```
Fallback email can link accounts incorrectly; make it deterministic and provider-scoped.
Using username risks collisions; then GetUserByEmail may attach the wrong account. Prefer a provider+providerID-based fallback and mark it as synthetic in-flow.
Apply this diff:
```diff
-	// If email is empty, create a fallback email
-	if email == "" {
-		if username != "" {
-			email = username + "@github.local"
-		} else if providerID != "" {
-			email = providerID + "@github.local"
-		} else {
-			email = "user@github.local"
-		}
-		logrus.Infof("Using fallback email for OAuth user: %s", email)
-	}
+	// If email is empty, create a provider-scoped fallback email (synthetic)
+	isFallback := false
+	if email == "" {
+		isFallback = true
+		if providerID != "" {
+			email = fmt.Sprintf("%s@%s.oauth.local", providerID, provider)
+		} else if username != "" {
+			email = fmt.Sprintf("%s@%s.oauth.local", username, provider)
+		} else {
+			email = fmt.Sprintf("user-%s@%s.oauth.local", uuid.NewString(), provider)
+		}
+		logrus.Debug("Using fallback email for OAuth user")
+	}
```

📝 Committable suggestion
```go
	// If email is empty, create a provider-scoped fallback email (synthetic)
	isFallback := false
	if email == "" {
		isFallback = true
		if providerID != "" {
			email = fmt.Sprintf("%s@%s.oauth.local", providerID, provider)
		} else if username != "" {
			email = fmt.Sprintf("%s@%s.oauth.local", username, provider)
		} else {
			email = fmt.Sprintf("user-%s@%s.oauth.local", uuid.NewString(), provider)
		}
		logrus.Debug("Using fallback email for OAuth user")
	}
```
🤖 Prompt for AI Agents
In internal/auth/service.go around lines 153-164, the fallback email logic must
stop using plain username (risking collisions) and instead build a
deterministic, provider-scoped synthetic email from the provider and providerID;
change the branch to prefer provider + "+" + providerID + "@<provider>.local"
(or provider + "+" + username if providerID is empty), mark the account/email as
synthetic (set/add the appropriate synthetic flag/metadata or append a clear
"+synthetic" token into the local-part) so downstream GetUserByEmail won't
incorrectly link accounts, and update the log message to indicate this is a
synthetic provider-scoped fallback email.
```diff
 	if err == nil {
-		// User exists, return it
+		// User exists, update email if it's empty
 		if existingUser.Email == "" {
 			logrus.Infof("Updating empty email for existing user %s to %s", existingUser.ID, email)
 			// Update the user's email
 			_, updateErr := s.db.UpdateUser(s.ctx, sqlc.UpdateUserParams{
 				ID:       existingUser.ID,
 				Username: existingUser.Username,
 				Email:    email,
 			})
 			if updateErr != nil {
 				logrus.Errorf("Failed to update user email: %v", updateErr)
 			} else {
 				// Refresh the user data
 				existingUser, err = s.db.GetUser(s.ctx, existingUser.ID)
 				if err != nil {
 					logrus.Errorf("Failed to refresh user data: %v", err)
 				}
 			}
 		}
 		return s.convertDBUser(existingUser), nil
```
🛠️ Refactor suggestion
Avoid email-based linking when the email is synthetic.
If the email was just synthesized, skip GetUserByEmail linking to prevent cross-account attachment via collisions.
Apply this diff:
```diff
-	// Check if user exists with same email
-	emailUser, err := s.db.GetUserByEmail(s.ctx, email)
-	if err == nil {
-		// Update existing user with OAuth info
-		// For now, just return the existing user
-		return s.convertDBUser(emailUser), nil
-	}
+	// Check if user exists with same email (only when not using a synthetic fallback)
+	if !isFallback {
+		emailUser, err := s.db.GetUserByEmail(s.ctx, email)
+		if err == nil {
+			return s.convertDBUser(emailUser), nil
+		}
+	}
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
internal/auth/service.go lines 171-191: the current logic updates/links users
based on an email match even when the email was synthesized; to fix this, add a
guard so synthesized emails never trigger GetUserByEmail-based linking — either
pass an explicit boolean (e.g., emailSynthesized) into this flow or implement an
IsSyntheticEmail(email) helper and check it before calling s.db.GetUserByEmail;
if the email is synthetic, skip the GetUserByEmail branch entirely (do not
update existing user email or refresh/return that user) and proceed as if no
email match was found so no cross-account linking can occur.
```go
	// Ensure email is not empty - if it is, try to get fresh user data
	email := dbSession.Email
	if email == "" {
		logrus.Warnf("ValidateSession: Email is empty for user %s, fetching fresh user data", dbSession.UserID)
		freshUser, err := s.db.GetUser(s.ctx, dbSession.UserID)
		if err != nil {
			logrus.Errorf("ValidateSession: Failed to get fresh user data: %v", err)
		} else {
			email = freshUser.Email
			logrus.Infof("ValidateSession: Fresh user data - Email: %s", email)
		}
	}
```
🛠️ Refactor suggestion
Guarantee non-empty session email.
If fresh DB still has empty email, synthesize a provider-scoped fallback so downstream code never sees empty.
Apply this diff:
```diff
 	email := dbSession.Email
 	if email == "" {
 		logrus.Warnf("ValidateSession: Email is empty for user %s, fetching fresh user data", dbSession.UserID)
 		freshUser, err := s.db.GetUser(s.ctx, dbSession.UserID)
 		if err != nil {
 			logrus.Errorf("ValidateSession: Failed to get fresh user data: %v", err)
 		} else {
 			email = freshUser.Email
 			logrus.Infof("ValidateSession: Fresh user data - Email: %s", email)
 		}
+		if email == "" {
+			// Final fallback to ensure non-empty
+			email = fmt.Sprintf("%s@%s.oauth.local", dbSession.UserID.String(), dbSession.Provider.String)
+			logrus.Debug("ValidateSession: Using synthetic fallback email in session")
+		}
 	}
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In internal/auth/service.go around lines 337 to 349, after attempting to refresh
the user's email from the DB, ensure you synthesize a non-empty provider-scoped
fallback if the freshUser.Email is still empty: derive a deterministic fallback
like "<userID>@<provider>" (use the session's provider field or other provider
identifier available on dbSession), assign it to email, and log that a fallback
was synthesized; keep the existing DB-refresh error handling and only synthesize
the fallback when both dbSession.Email and freshUser.Email are empty so
downstream code never sees an empty email.
```python
    live_update=[
        sync('./.tilt/guac', '/app/guac'),
        sync('./templates', '/app/templates'),
        sync('./certs', '/app/certs'),
```
🛠️ Refactor suggestion
Avoid syncing /app/certs; it can overwrite init-generated certs at runtime.
The initContainer is responsible for generating certs. Live-update syncing this path risks clobbering keys/certs and breaking TLS.
Apply this diff:
```diff
-        sync('./certs', '/app/certs'),
+        # Avoid syncing certs; init container owns /app/certs
+        # sync('./certs', '/app/certs'),
```

📝 Committable suggestion
```python
        # Avoid syncing certs; init container owns /app/certs
        # sync('./certs', '/app/certs'),
```
🤖 Prompt for AI Agents
In Tiltfile around line 28, remove the live-update sync of './certs' to
'/app/certs' because it can overwrite initContainer-generated certificates at
runtime; instead either delete this sync entry entirely or change it to sync
local certs to a different non-production path (e.g., '/app/dev-certs') and
update any mounts or code that reference the alternate path so production
init-generated certs are never clobbered.
Summary by CodeRabbit