Resolves #458
Actionable comments posted: 0
🧹 Nitpick comments (2)
app/utils/userMetrics.ts (2)
21-33: Adjusted transcript fallback logic looks correct; consider DRY-ing it up

The new adjustedTranscript-first logic in `calculateTotalWords` aligns with the PR goal and should prevent word-count drops on sync. The `textToCount = adjustedTranscript || transcript` pattern and word-splitting look solid.

You now repeat essentially the same selection logic here and in `calculateAverageWPM`. If this pattern stabilizes, consider a small helper to centralize it and avoid drift, e.g.:

```ts
const getInteractionTextToCount = (
  interaction: Interaction
): string | undefined => {
  const adjustedTranscript = interaction.llm_output?.adjustedTranscript?.trim()
  if (adjustedTranscript) return adjustedTranscript
  const transcript = interaction.asr_output?.transcript?.trim()
  return transcript || undefined
}
```

Then both metrics can call this helper instead of duplicating the `adjustedTranscript`/`transcript` handling.
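Concretely, the suggested helper could be wired into the total-words metric along these lines. The `Interaction` shape below is assumed from the snippets in this review, not taken from the actual `app/utils/userMetrics.ts`:

```typescript
// Assumed shape for illustration only; the real Interaction type lives in the app.
interface Interaction {
  llm_output?: { adjustedTranscript?: string }
  asr_output?: { transcript?: string }
  duration_ms?: number
}

// Centralized text selection: prefer the LLM-adjusted transcript, fall back to ASR.
const getInteractionTextToCount = (
  interaction: Interaction
): string | undefined => {
  const adjustedTranscript = interaction.llm_output?.adjustedTranscript?.trim()
  if (adjustedTranscript) return adjustedTranscript
  const transcript = interaction.asr_output?.transcript?.trim()
  return transcript || undefined
}

// Both metrics can then share the same selection logic.
const calculateTotalWords = (interactions: Interaction[]): number =>
  interactions.reduce((total, interaction) => {
    const text = getInteractionTextToCount(interaction)
    return text ? total + text.split(/\s+/).length : total
  }, 0)
```

Because the helper trims before testing, whitespace-only transcripts are treated as absent rather than counted as one word.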
59-64: Average WPM text selection/filtering matches new behavior; minor simplification possible

Using `(adjustedTranscript || transcript) && interaction.duration_ms` in the filter and then reusing `textToCount = adjustedTranscript || transcript` inside the loop is consistent with `calculateTotalWords` and correctly prioritizes LLM output while ensuring duration is present.

Given the filter already enforces both "has text" and "has duration", the inner guard `if (textToCount && interaction.duration_ms) { … }` is effectively redundant; you can safely assume both are defined in this loop and simplify the conditional (or drop it entirely) to keep the hot path a bit cleaner.

Also applies to: 71-85
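One way to drop the redundant inner guard entirely is to select text and duration in a single pass, so the averaging step never needs to re-check them. This is a sketch under an assumed `Interaction` shape, not the actual implementation:

```typescript
// Assumed shape for illustration only.
interface Interaction {
  llm_output?: { adjustedTranscript?: string }
  asr_output?: { transcript?: string }
  duration_ms?: number
}

const calculateAverageWPM = (interactions: Interaction[]): number => {
  // Compute a WPM sample per usable interaction; interactions missing
  // text or duration contribute nothing, so no guard is needed later.
  const samples = interactions.flatMap((interaction) => {
    const textToCount =
      interaction.llm_output?.adjustedTranscript?.trim() ||
      interaction.asr_output?.transcript?.trim()
    if (!textToCount || !interaction.duration_ms) return []
    const words = textToCount.split(/\s+/).length
    return [words / (interaction.duration_ms / 60000)]
  })
  if (samples.length === 0) return 0
  return samples.reduce((sum, wpm) => sum + wpm, 0) / samples.length
}
```

Folding the filter and the loop into one `flatMap` also avoids evaluating the `adjustedTranscript || transcript` expression twice per interaction.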
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
app/utils/userMetrics.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always prefer console commands over log commands. Use `console.log` instead of `log.info`
Files:
app/utils/userMetrics.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/always.mdc)
Never use empty catch statements
Files:
app/utils/userMetrics.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/code-conventions.mdc)
**/*.{ts,tsx}: Follow standard, idiomatic TypeScript coding practices for structure, naming, and types
Avoid adding comments unless they explain complex logic or non-obvious decisions; well-written, self-explanatory code is preferred
Do not add comments that merely restate what the code does
Rely on comprehensive tests to document the behavior and usage of code rather than extensive comments within the code itself
Use kebab-case when naming directories, TypeScript, and other files
**/*.{ts,tsx}: Prefer interfaces over types for object definitions
Use type for unions, intersections, and mapped types
NEVER use `any` or `as any` types or coercion
Leverage TypeScript's built-in utility types
Use generics for reusable type patterns
Use PascalCase for type names and interfaces
Use camelCase for variables and functions
Use UPPER_CASE for constants
Use descriptive names with auxiliary verbs (e.g., isLoading, hasError)
Prefix interfaces for React props with 'Props' (e.g., ButtonProps)
Keep type definitions close to where they're used
Export types and interfaces from dedicated type files when shared
Co-locate component props with their components
Use explicit return types for public functions
Use arrow functions for callbacks and methods
Implement proper error handling with custom error types
Use function overloads for complex type scenarios
Prefer async/await over Promises
Prefer function declarations over function expressions
Use readonly for immutable properties
Leverage discriminated unions for type safety
Use type guards for runtime type checking
Implement proper null checking
Avoid type assertions unless necessary
Handle Promise rejections properly
Files:
app/utils/userMetrics.ts
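Several of the conventions above (discriminated unions, type guards, explicit return types, `readonly`) can be combined in one small pattern. A generic sketch, unrelated to the file under review:

```typescript
// Discriminated union: the `kind` field tags each variant.
interface SuccessResult {
  readonly kind: 'success'
  readonly value: number
}

interface ErrorResult {
  readonly kind: 'error'
  readonly message: string
}

type Result = SuccessResult | ErrorResult

// User-defined type guard for runtime narrowing.
const isSuccess = (result: Result): result is SuccessResult =>
  result.kind === 'success'

// After the guard, TypeScript knows which variant it has; no casts needed.
const describeResult = (result: Result): string =>
  isSuccess(result) ? `value: ${result.value}` : `error: ${result.message}`
```

Inside `isSuccess`'s true branch the compiler narrows `Result` to `SuccessResult`, which is what makes `result.value` accessible without any assertion.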
app/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Electron renderer code should be organized in the `app/` directory and use React + Tailwind
Files:
app/utils/userMetrics.ts
{app,lib}/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with 2-space indent for code formatting across TypeScript and React files
Files:
app/utils/userMetrics.ts
{app,lib,server}/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
{app,lib,server}/**/*.{ts,tsx,js,jsx}: Use ESLint to enforce code style and run `bun run lint` before submitting code
Always use `console` commands instead of log commands (e.g., `console.log` instead of `log.info`)
Files:
app/utils/userMetrics.ts
{app,lib}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
{app,lib}/**/*.{ts,tsx}: Components and classes use `PascalCase` naming convention
Hooks and utility functions use `camelCase` naming convention
Files:
app/utils/userMetrics.ts
{app,lib,server}/**/*.{ts,tsx,js}
📄 CodeRabbit inference engine (AGENTS.md)
Constants use `SCREAMING_SNAKE_CASE` naming convention
Files:
app/utils/userMetrics.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: run-tests / test
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (swift)
This fixes an issue where the word count appeared to jump for a user. We have an odd user state in which we don't create user interactions with LLM output locally, but we upsert the interaction with the LLM output on sync. Previously we counted only asr_output, so interactions that had LLM output stored the LLM output as the asr_output; once the user synced, the upsert replaced it with the actual asr_output and set the llm_output on that interaction, thereby decreasing the word count (which was the sum over all asr_outputs).

The real fix for this (as a follow-up ticket) would be to return both the asr_output and llm_output in the transcription response, and use both when creating the interaction for the user locally.
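The failure mode and the fix can be sketched end to end. The shapes and transcripts below are hypothetical, chosen only to show why counting asr_output alone drops the total across a sync while an adjustedTranscript-first count stays stable:

```typescript
// Hypothetical shapes to illustrate the sync scenario described above.
interface Interaction {
  llm_output?: { adjustedTranscript?: string }
  asr_output?: { transcript?: string }
}

// adjustedTranscript-first counting (the approach this PR adopts).
const countWords = (interaction: Interaction): number => {
  const text =
    interaction.llm_output?.adjustedTranscript?.trim() ||
    interaction.asr_output?.transcript?.trim()
  return text ? text.split(/\s+/).length : 0
}

// Before sync: the cleaned LLM text was stored as the asr_output (5 words).
const local: Interaction = {
  asr_output: { transcript: 'five cleaned up words here' },
}

// After sync: the upsert restores the real, shorter asr_output (2 words)
// and moves the cleaned text into llm_output.
const synced: Interaction = {
  asr_output: { transcript: 'raw words' },
  llm_output: { adjustedTranscript: 'five cleaned up words here' },
}
```

Counting asr_output alone would report 5 words before the sync and 2 after it; `countWords` reports 5 in both states, so the user-visible total no longer drops.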