v0.6.2: mothership stability, chat iframe embedding, KB upserts, new blog post #3650

waleedlatif1 merged 11 commits into main
Conversation
* Fix files
* Fix
* Fix
…ow registry on switch
* feat(csp): allow chat UI to be embedded in iframes

  Mirror the existing form embed CSP pattern for chat pages: add getChatEmbedCSPPolicy() with frame-ancestors *, configure /chat/:path* headers in next.config.ts without X-Frame-Options, and early-return in proxy.ts so chat routes skip the strict runtime CSP.

* refactor(csp): extract shared getEmbedCSPPolicy helper

  Deduplicate getChatEmbedCSPPolicy and getFormEmbedCSPPolicy into a shared private helper to prevent future divergence.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
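The deduplication described in these two commits can be sketched roughly as follows. This is a hypothetical reconstruction: the helper names match the commit messages, but the exact directive list lives in the repo's next.config.ts and is assumed here.

```typescript
// Hypothetical sketch of the shared embed-CSP helper the refactor commit
// describes. The directive list is illustrative, not the real policy.
function getEmbedCSPPolicy(): string {
  // frame-ancestors * permits embedding from any origin. Note that the
  // X-Frame-Options header must be omitted on these routes: a
  // DENY/SAMEORIGIN value would conflict with the permissive
  // frame-ancestors directive in older browsers.
  return ["frame-ancestors *", "default-src 'self'"].join('; ')
}

function getChatEmbedCSPPolicy(): string {
  return getEmbedCSPPolicy()
}

function getFormEmbedCSPPolicy(): string {
  return getEmbedCSPPolicy()
}
```

Routing both public policies through one private helper means a future directive change cannot silently apply to forms but not chat, which is the divergence the refactor commit guards against.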
* fix(logs): persist execution diagnostics markers

  Store last-started and last-completed block markers with finalization metadata so later read surfaces can explain how a run ended without reconstructing executor state.

* fix(executor): preserve durable diagnostics ordering

  Await only the persistence needed to keep diagnostics durable before terminal completion while keeping callback failures from changing execution behavior.

* fix(logs): preserve fallback diagnostics semantics

  Keep successful fallback output and accumulated cost intact while tightening progress-write draining and deduplicating trace span counting for diagnostics helpers.

* fix(api): restore async execute route test mock

  Add the missing AuthType export to the hybrid auth mock so the async execution route test exercises the 202 queueing path instead of crashing with a 500 in CI.

* fix(executor): align async block error handling

* fix(logs): tighten marker ordering scope

  Allow same-millisecond marker writes to replace prior markers and drop the unused diagnostics read helper so this PR stays focused on persistence rather than unread foundation code.

* fix(logs): remove unused finalization type guard

  Drop the unused helper so this PR only ships the persistence-side status types it actually uses.

* fix(executor): await subflow diagnostics callbacks

  Ensure empty-subflow and subflow-error lifecycle callbacks participate in progress-write draining before terminal finalization while still swallowing callback failures.

Co-authored-by: test <test@example.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* feat(admin): add user search by email and ID, remove table border

  - Replace Load Users button with a live search input; query fires on any input
  - Email search uses listUsers with contains operator
  - User ID search (UUID format) uses admin.getUser directly for exact lookup
  - Remove outer border on user table that rendered white in dark mode
  - Reset pagination to page 0 on new search

* fix(admin): replace live search with explicit search button

  - Split searchInput (controlled input) from searchQuery (committed value) so the hook only fires on Search click or Enter, not every keystroke
  - Gate table render on searchQuery.length > 0 to prevent stale results showing after input is cleared

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
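The query-routing rule in the first commit (UUID-shaped input goes to an exact ID lookup, everything else to an email `contains` filter) can be sketched as a small pure function. The names `planUserSearch` and `SearchPlan` are illustrative, not from the codebase.

```typescript
// Sketch of the admin search routing described above (hypothetical names).
// A UUID-formatted query is dispatched to an exact admin.getUser-style
// lookup; any other input becomes a listUsers email "contains" filter.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i

type SearchPlan =
  | { kind: 'byId'; id: string }
  | { kind: 'byEmail'; contains: string }

function planUserSearch(query: string): SearchPlan {
  const q = query.trim()
  return UUID_RE.test(q)
    ? { kind: 'byId', id: q }
    : { kind: 'byEmail', contains: q }
}
```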
* feat(knowledge): add upsert document operation to Knowledge block

  Add a "Create or Update" (upsert) document capability that finds an existing document by ID or filename, deletes it if found, then creates a new document and queues re-processing. Includes new tool, API route, block wiring, and typed interfaces.

* fix(knowledge): address review comments on upsert document

  - Reorder create-then-delete to prevent data loss if creation fails
  - Move Zod validation before workflow authorization for validated input
  - Fix btoa stack overflow for large content using loop-based encoding

* fix(knowledge): guard against empty createDocumentRecords result

  Add safety check before accessing firstDocument to prevent TypeError and data loss if createDocumentRecords unexpectedly returns empty.

* fix(knowledge): prevent documentId fallthrough and use byte-count limit

  - Use if/else so filename lookup only runs when no documentId is provided, preventing stale IDs from silently replacing unrelated documents
  - Check utf8 byte length instead of character count for 1MB size limit, correctly handling multi-byte characters (CJK, emoji)

* fix(knowledge): rollback on delete failure, deduplicate sub-block IDs

  - Add compensating rollback: if deleteDocument throws after create succeeds, clean up the new record to prevent orphaned pending docs
  - Merge duplicate name/content sub-blocks into single entries with array conditions, matching the documentTags pattern

* lint

* lint

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
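The byte-count fix above matters because JavaScript string length counts UTF-16 code units, not encoded bytes, so CJK text and emoji can blow past a 1 MB storage limit while passing a character-count check. A minimal sketch of the corrected check, with illustrative names:

```typescript
// Sketch of the UTF-8 byte-count size check described above.
// MAX_CONTENT_BYTES and exceedsSizeLimit are hypothetical names.
const MAX_CONTENT_BYTES = 1024 * 1024 // 1 MB

function exceedsSizeLimit(content: string): boolean {
  // TextEncoder always encodes to UTF-8, so the encoded length is the
  // true byte count. content.length would undercount multi-byte
  // characters: e.g. one emoji is 2 UTF-16 code units but 4 UTF-8 bytes.
  return new TextEncoder().encode(content).length > MAX_CONTENT_BYTES
}
```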
* Include rid
* Persist rid
* fix ui
* address comments
* update types

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* improvement(landing): added enterprise section
* make components interactive
* added more things to pricing sheet
* remove debug log
* fix(landing): remove dead DotGrid component and fix enterprise CTA to use Link

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(home): resizable chat/resource panel divider

* fix(home): address PR review comments

  - Remove aria-hidden from resize handle outer div so separator role is visible to AT
  - Add viewport-resize re-clamping in useMothershipResize to prevent panel exceeding max % after browser window narrows
  - Change default MothershipView width from 60% to 50%

* refactor(home): eradicate useEffect anti-patterns per you-might-not-need-an-effect

  - use-chat: remove messageQueue→ref sync Effect; inline assignment like other refs
  - use-chat: replace activeResourceId selection Effect with useMemo (derived value, avoids extra re-render cycle; activeResourceIdRef now tracks effective value for API payloads)
  - use-chat: replace 3x useLayoutEffect ref-sync (processSSEStream, finalize, sendMessage) with direct render-phase assignment, consistent with existing resourcesRef pattern
  - user-input: fold onEditValueConsumed callback into existing render-phase guard; remove Effect
  - home: move isResourceAnimatingIn 400ms timer into expandResource/handleResourceEvent event handlers where setIsResourceAnimatingIn(true) is called; remove reactive Effect watcher

* fix(home): revert default width to 60%, reduce max resize to 63%
* improvement(react): replace useEffect anti-patterns with better React primitives
* improvement(react): replace useEffect anti-patterns with better React primitives
* improvement(home): use pointer events for resize handle (touch + mouse support)
* fix(autosave): store idle-reset timer ref to prevent status corruption on rapid saves
* fix(home): move onEditValueConsumed call out of render phase into useEffect
* fix(home): add pointercancel handler; fix(settings): sync name on profile refetch
* fix(home): restore cleanupRef assignment dropped during AbortController refactor
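The resize behavior these commits describe (a percentage width clamped on drag and re-clamped when the viewport narrows) reduces to a pair of pure functions. This is a minimal sketch under assumed bounds: the PR settles on a 63% maximum, while the minimum and function names here are illustrative.

```typescript
// Minimal sketch of the panel-resize clamping described above.
// MAX_PERCENT matches the 63% cap mentioned in the commits;
// MIN_PERCENT and the function names are hypothetical.
const MIN_PERCENT = 20
const MAX_PERCENT = 63

function clampPanelPercent(percent: number): number {
  // Applied both while dragging and again on a window "resize" event,
  // so a panel sized under a wide viewport cannot exceed the cap after
  // the browser window narrows.
  return Math.min(MAX_PERCENT, Math.max(MIN_PERCENT, percent))
}

function percentFromPointer(clientX: number, viewportWidth: number): number {
  // Convert an absolute pointer position (from pointermove, which covers
  // both mouse and touch) into a clamped percentage width.
  return clampPanelPercent((clientX / viewportWidth) * 100)
}
```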
Greptile Summary

This is a large staging PR that bundles ten separate fixes and features: mothership file-upload tracking, workspace registry corruption on switch, chat iframe embedding, durable execution diagnostics, knowledge base upsert, mothership request IDs, an enterprise landing section, a DB connection pool reduction, a resizable resource panel divider, and a new blog post.

Key changes:
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant Client
participant ExecutionCore as execution-core.ts
participant BlockExecutor as block-executor.ts
participant LogSession as LoggingSession
participant DB as Database
Client->>ExecutionCore: executeWorkflowCore()
ExecutionCore->>LogSession: initialize()
LogSession->>DB: INSERT workflow_execution_logs
loop For each block
BlockExecutor->>ExecutionCore: wrappedOnBlockStart(blockId, ...)
ExecutionCore->>LogSession: onBlockStart() [await]
LogSession->>DB: UPDATE execution_data.lastStartedBlock [async/tracked]
ExecutionCore-->>BlockExecutor: (external onBlockStart fired void)
BlockExecutor->>BlockExecutor: execute block
BlockExecutor->>ExecutionCore: wrappedOnBlockComplete(blockId, output, ...)
ExecutionCore->>LogSession: onBlockComplete() [await]
LogSession->>DB: UPDATE execution_data.lastCompletedBlock [async/tracked]
LogSession->>DB: UPDATE cost [void/tracked]
ExecutionCore-->>BlockExecutor: (external onBlockComplete fired void)
end
ExecutionCore->>LogSession: safeComplete()
LogSession->>LogSession: drainPendingProgressWrites()
LogSession->>DB: UPDATE workflow_execution_logs (status, traceSpans, finalizationPath)
Last reviewed commit: "feat(home): resizabl..."
      },
      ...(workflowId && { workflowId }),
    }

    if (params.documentId && String(params.documentId).trim().length > 0) {
      requestBody.documentId = String(params.documentId).trim()
    }

    return requestBody
  },
},

transformResponse: async (response): Promise<KnowledgeUpsertDocumentResponse> => {
  const result = await response.json()
  const data = result.data ?? result
  const documentsCreated = data.documentsCreated ?? []
  const firstDocument = documentsCreated[0]
  const isUpdate = data.isUpdate ?? false
  const previousDocumentId = data.previousDocumentId ?? null
  const documentId = firstDocument?.documentId ?? firstDocument?.id ?? ''

  return {
    success: true,
    output: {
      message: isUpdate
        ? 'Successfully updated document in knowledge base'
        : 'Successfully created document in knowledge base',
      documentId,
      data: {
        documentId,
        documentName: firstDocument?.filename ?? 'Unknown',
        type: 'document',
        enabled: true,
        isUpdate,
        previousDocumentId,
        createdAt: new Date().toISOString(),
        updatedAt: new Date().toISOString(),
transformResponse ignores HTTP error status
If the API returns a 4xx or 5xx response (e.g. { error: 'Unauthorized' }), this function never checks response.ok. It falls through to result.data ?? result, which becomes the raw error body, documentsCreated is undefined → [], firstDocument is undefined, and documentId resolves to ''. The function then unconditionally returns { success: true, output: { documentId: '' } }, masking the failure entirely.
Compare this to how other knowledge tools handle errors – they guard on response.ok before parsing the success path. The same pattern should be applied here:
transformResponse: async (response): Promise<KnowledgeUpsertDocumentResponse> => {
const result = await response.json()
if (!response.ok) {
return {
success: false,
output: { data: {} as any, message: result.error || 'Failed to upsert document', documentId: '' },
error: result.error || `HTTP ${response.status}`,
}
}
const data = result.data ?? result
// ... rest of success path
}

      ...(validatedData.documentTagsData && {
        documentTagsData: validatedData.documentTagsData,
      }),
    },
  ],
  knowledgeBaseId,
  requestId
)

const firstDocument = createdDocuments[0]
if (!firstDocument) {
  logger.error(`[${requestId}] createDocumentRecords returned empty array unexpectedly`)
  return NextResponse.json({ error: 'Failed to create document record' }, { status: 500 })
}

if (existingDocumentId) {
  try {
    await deleteDocument(existingDocumentId, requestId)
  } catch (deleteError) {
    logger.error(
      `[${requestId}] Failed to delete old document ${existingDocumentId}, rolling back new record`,
      deleteError
    )
    await deleteDocument(firstDocument.documentId, requestId).catch(() => {})
    return NextResponse.json({ error: 'Failed to replace existing document' }, { status: 500 })
  }
}
Silent rollback failure leaves orphaned records
The create-then-delete rollback silently swallows errors:
await deleteDocument(firstDocument.documentId, requestId).catch(() => {})

If the rollback itself fails (e.g. a transient DB error), both the old document and the newly created document record will exist in the knowledge base simultaneously. The caller receives a 500, but neither record is cleaned up, leading to duplicate documents that are invisible to normal user flows but still consume storage and can surface in search results.
Since the whole operation is logically atomic (replace), wrapping createDocumentRecords and deleteDocument in a database transaction would be the safest fix. If a transaction isn't feasible here (e.g. the service layer doesn't expose transaction contexts), at minimum the rollback failure should be logged at error level with enough context to trigger manual cleanup:
await deleteDocument(firstDocument.documentId, requestId).catch((rollbackError) => {
logger.error(
`[${requestId}] Rollback failed — orphaned document ${firstDocument.documentId} may exist`,
rollbackError
)
})

    iterationContext?: IterationContext,
    childWorkflowContext?: ChildWorkflowContext
  ) => {
    await loggingSession.onBlockComplete(blockId, blockName, blockType, output)
    if (onBlockComplete) {
      await onBlockComplete(
    try {
      await loggingSession.onBlockComplete(blockId, blockName, blockType, output)
      if (onBlockComplete) {
        void onBlockComplete(
          blockId,
          blockName,
          blockType,
          output,
          iterationContext,
          childWorkflowContext
        ).catch((error) => {
          logger.warn(`[${requestId}] Block completion callback failed`, {
            executionId,
            blockId,
            blockType,
            error,
          })
        })
      }
    } catch (error) {
      logger.warn(`[${requestId}] Block completion persistence failed`, {
        executionId,
        blockId,
        blockType,
        error,
      })
    }
  }

  const wrappedOnBlockStart = async (
    blockId: string,
    blockName: string,
    blockType: string,
    executionOrder: number,
    iterationContext?: IterationContext,
    childWorkflowContext?: ChildWorkflowContext
  ) => {
    try {
      await loggingSession.onBlockStart(blockId, blockName, blockType, new Date().toISOString())
      if (onBlockStart) {
        void onBlockStart(
          blockId,
          blockName,
          blockType,
          executionOrder,
          iterationContext,
External onBlockComplete callback is now fire-and-forget
Previously onBlockComplete (the real-time SSE/streaming callback) was awaited, ensuring events were delivered before the block execution pipeline advanced. The new wrapper fires it with void ... .catch(...) — unblocking execution but giving up ordering guarantees.
In practice this means a fast-executing workflow could advance to the next block (and emit its start event) before the previous block's complete event has been flushed to the SSE client, potentially causing out-of-order events visible to the user.
If ordering is acceptable to relax here (e.g. the SSE layer has its own sequencing), this is intentional. If not, consider keeping the await for the external callback while only making loggingSession.onBlockComplete fire-and-forget:
await loggingSession.onBlockComplete(blockId, blockName, blockType, output)
if (onBlockComplete) {
await onBlockComplete(blockId, blockName, blockType, output, iterationContext, childWorkflowContext)
}

  >
}

  finalOutput: Record<string, unknown>
  traceSpans: TraceSpan[]
  workflowInput?: unknown
  executionState?: SerializableExecutionState
  finalizationPath: ExecutionFinalizationPath
  completionFailure?: string
  level?: 'info' | 'error'
  status?: 'completed' | 'failed' | 'cancelled' | 'pending'
drainPendingProgressWrites could spin if writes keep scheduling new writes
The drain loop snapshots the current set and awaits all of them, then loops back if the set is still non-empty. This correctly handles the case where a write completes and removes itself while another is still in progress. However, if any in-flight write schedules a new write (via trackProgressWrite) before the allSettled call returns, that new write is added to pendingProgressWrites and will be caught by the next iteration.
Currently this chain terminates because persistLastStartedBlock / persistLastCompletedBlock / flushAccumulatedCost don't recursively add more writes. But as this class evolves, a write that calls another trackProgressWrite internally would create an unbounded loop.
A simpler and safer drain strategy is to await a fixed snapshot once, then return – relying on safeComplete* callers to call drain only after block execution is fully finished (which is already the case):
private async drainPendingProgressWrites(): Promise<void> {
if (this.pendingProgressWrites.size > 0) {
await Promise.allSettled(Array.from(this.pendingProgressWrites))
}
}

* chore(blog): add v0.6 blog post and email broadcast scaffolding
* mothership blog
* turned on mothership blog
* small change

Co-authored-by: Emir Karabeg <emirkarabeg@berkeley.edu>