
feat(api): add tables and files v1 REST API with OpenAPI docs #3422

Merged
waleedlatif1 merged 16 commits into feat/mothership-copilot from waleedlatif1/tables-rest-api
Mar 5, 2026

Conversation

@waleedlatif1
Collaborator

Summary

  • Added v1 REST API endpoints for tables (CRUD, rows, bulk ops, upsert) and files (list, upload, download, delete)
  • Full OpenAPI 3.1.0 documentation for both APIs with curl examples
  • API key auth, rate limiting, workspace permissions on all endpoints

Type of Change

  • New feature

Testing

Tested manually

Checklist

  • Code follows project style guidelines
  • Self-reviewed my changes
  • Tests added/updated and passing
  • No new warnings introduced
  • I confirm that I have read and agree to the terms outlined in the Contributor License Agreement (CLA)

@cursor

cursor bot commented Mar 5, 2026

PR Summary

High Risk
Introduces new public v1 endpoints for table row mutations and file upload/download and updates API key scoping/rate limiting, which are security- and data-handling sensitive. Bugs in permission/scope checks or validation could expose cross-workspace access or allow unintended writes/deletes.

Overview
Adds new v1 REST endpoints for Tables and Files. Introduces /api/v1/tables (+ detail, row CRUD/bulk ops, and upsert) and /api/v1/files (+ upload, list, download, delete) with rate limiting, workspace-scope enforcement for workspace API keys, permission checks, and audit logging for table/file actions.

Updates API docs and hardens existing table routes. Extends openapi.json and docs navigation to include Tables and Files, and refactors existing /api/table routes to delegate inserts/updates/deletes/upserts to shared @/lib/table service functions, adding stricter request parsing/validation (non-empty filters, optional limits) and more robust date serialization/error mapping.
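The "non-empty filters, optional limits" parsing mentioned above could look roughly like this minimal, dependency-free sketch. The function name and result shape are illustrative assumptions; the real validation lives in the shared @/lib/table service layer:

```typescript
// Hypothetical sketch of stricter body validation: filters must be a
// non-empty plain object, and limit (if present) a positive integer.
type ParseResult =
  | { ok: true; filters: Record<string, unknown>; limit?: number }
  | { ok: false; error: string }

function parseDeleteByFilterBody(body: unknown): ParseResult {
  if (typeof body !== 'object' || body === null || Array.isArray(body)) {
    return { ok: false, error: 'Body must be a JSON object' }
  }
  const { filters, limit } = body as { filters?: unknown; limit?: unknown }
  if (typeof filters !== 'object' || filters === null || Array.isArray(filters)) {
    return { ok: false, error: 'filters must be an object' }
  }
  if (Object.keys(filters).length === 0) {
    return { ok: false, error: 'filters must not be empty' }
  }
  if (limit !== undefined && (typeof limit !== 'number' || !Number.isInteger(limit) || limit <= 0)) {
    return { ok: false, error: 'limit must be a positive integer' }
  }
  const safeFilters = filters as Record<string, unknown>
  return limit === undefined
    ? { ok: true, filters: safeFilters }
    : { ok: true, filters: safeFilters, limit: limit as number }
}
```

Rejecting `{}` filters up front prevents a missing filter from silently matching (and deleting) every row in a table.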

Written by Cursor Bugbot for commit 437c332.

@vercel

vercel bot commented Mar 5, 2026

The latest updates on your projects.

1 Skipped Deployment

Project | Deployment | Updated (UTC)
docs | Skipped | Mar 5, 2026 9:06pm


@greptile-apps
Contributor

greptile-apps bot commented Mar 5, 2026

Greptile Summary

This PR introduces v1 REST API endpoints for tables (full CRUD, batch operations, upsert) and files (list, upload, download, delete), backed by a refactored TableService layer, full OpenAPI 3.1.0 documentation, rate limiting, workspace-scoped API key enforcement, and audit logging.

The route-layer implementation is well-structured — JSON parse errors are guarded, empty filters are rejected via schema validation, audit events are fired, the upsert endpoint runs inside a FOR UPDATE transaction, and FormData inputs are correctly type-checked. Schema normalization, error handling, and file upload/download endpoints are all correct.
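The JSON-parse guard the review credits could be sketched as below; `parseJsonOr400` is an illustrative name, not the route's actual helper. The point is that a malformed body becomes a deliberate 400 instead of an unhandled exception surfacing as a 500:

```typescript
// Sketch: map a malformed JSON request body to a 400 status rather than
// letting JSON.parse throw and bubble up as an internal server error.
function parseJsonOr400(rawBody: string): { status: number; body?: unknown; error?: string } {
  try {
    return { status: 200, body: JSON.parse(rawBody) }
  } catch {
    return { status: 400, error: 'Invalid JSON in request body' }
  }
}
```

In a Next.js route handler the same pattern wraps `await request.json()` in try/catch before any schema validation runs.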

However, the service layer has three unresolved data-integrity issues under concurrent load that affect core row-mutation operations:

  • insertRow: checkUniqueConstraintsDb is called outside the FOR UPDATE transaction lock. Two concurrent inserts can both pass the uniqueness check and both commit, silently violating unique column constraints.

  • updateRow: The function has no enclosing transaction. The uniqueness check and the UPDATE statement are separate database operations, so concurrent PATCH requests on different rows can race to write the same value to a unique column.

  • deleteRowsByFilter: Returns affectedCount from a pre-delete SELECT, not from the actual DELETE. Under concurrency, the count can be inflated — unlike deleteRowsByIds, which was already fixed to use .returning().

These issues should be fixed before this API is used in production, especially since this is a public-facing API that modifies user data and no automated tests were added for the new endpoints.

Confidence Score: 2/5

  • This PR introduces real data-integrity bugs under concurrent load that can silently violate unique column constraints on production data.
  • The PR contains three TOCTOU race conditions in the core table service layer affecting critical row-mutation operations (insert, update, delete). These bugs allow concurrent requests to bypass uniqueness constraints by performing validation outside of atomic transaction boundaries. The route layer is well-implemented with proper error handling, validation, and HTTP semantics, but the service-layer bugs are showstoppers for production use of a public-facing API that modifies user data. No automated tests were added for any of the new endpoints, leaving no regression detection mechanism.
  • apps/sim/lib/table/service.ts requires immediate attention — specifically the insertRow, updateRow, and deleteRowsByFilter functions must be fixed to perform validation and mutation atomically inside transaction boundaries with appropriate row-level locks.

Sequence Diagram

sequenceDiagram
    participant ClientA as Client A (PATCH row X)
    participant ClientB as Client B (PATCH row Y)
    participant Service as updateRow()
    participant DB as Database

    ClientA->>Service: PATCH /rows/X {email: "a@b.com"}
    ClientB->>Service: PATCH /rows/Y {email: "a@b.com"}

    Service->>DB: checkUniqueConstraintsDb (for A) — no duplicate found ✓
    Service->>DB: checkUniqueConstraintsDb (for B) — no duplicate found ✓

    Note over Service,DB: No transaction — race window open

    Service->>DB: UPDATE row X SET email="a@b.com"
    Service->>DB: UPDATE row Y SET email="a@b.com"

    Note over DB: ⚠️ Both rows now have email="a@b.com"<br/>Unique constraint violated silently

Comments Outside Diff (3)

  1. apps/sim/lib/table/service.ts, line 244-285 (link)

    Unique constraint check outside the transaction lock in insertRow

    checkUniqueConstraintsDb (line 247) runs before the db.transaction(...) at line 259. The FOR UPDATE lock at lines 260–262 serializes the row-count check and the actual insert, but does not cover the uniqueness check. Under concurrency, this race is possible:

    1. Request A calls checkUniqueConstraintsDb — no duplicate found
    2. Request B calls checkUniqueConstraintsDb — no duplicate found (A hasn't inserted yet)
    3. Request A enters the transaction, acquires the lock, inserts its row
    4. Request B enters the transaction, waits for the lock, then inserts its row

    Both requests succeed, and the supposedly unique column now has two identical values.

    Fix: Move checkUniqueConstraintsDb inside the transaction, after acquiring the lock:

    const [row] = await db.transaction(async (trx) => {
      await trx.execute(sql`SELECT 1 FROM user_table_definitions WHERE id = ${data.tableId} FOR UPDATE`)
    
      // Now check uniqueness atomically, inside the lock
      const uniqueColumns = getUniqueColumns(table.schema)
      if (uniqueColumns.length > 0) {
        const uniqueValidation = await checkUniqueConstraintsDb(data.tableId, data.data, table.schema)
        if (!uniqueValidation.valid) throw new Error(uniqueValidation.errors.join(', '))
      }
    
      const [{ count: currentCount }] = await trx.select({ count: count() })...
      // ... rest of insert logic
    })
  2. apps/sim/lib/table/service.ts, line 671-723 (link)

    updateRow has no transaction — uniqueness check and update are not atomic

    updateRow fetches the existing row, checks unique constraints, and then updates — all outside any transaction (lines 676–713). Two concurrent PATCH requests targeting different rows but both writing the same value to a unique column will both pass checkUniqueConstraintsDb (since neither has committed yet) and then both update, silently violating the uniqueness guarantee.

    Additionally, since there is no transaction, if the row is deleted between the getRowById check (line 677) and the db.update(...) (line 710), the update silently succeeds (affecting 0 rows) but the function returns a fabricated row object as if the update succeeded, because it does not use .returning().

    Fix: Wrap the uniqueness check and the update in a transaction with a row-level lock:

    const updatedRow = await db.transaction(async (trx) => {
      // Lock the row to prevent concurrent modification
      const [existingRow] = await trx
        .select()
        .from(userTableRows)
        .where(eq(userTableRows.id, data.rowId))
        .for('update')
        .limit(1)
    
      if (!existingRow) throw new Error('Row not found')
    
      // Uniqueness check inside the lock
      const uniqueValidation = await checkUniqueConstraintsDb(data.tableId, data.data, table.schema, data.rowId)
      if (!uniqueValidation.valid) throw new Error(uniqueValidation.errors.join(', '))
    
      const [updated] = await trx
        .update(userTableRows)
        .set({ data: data.data, updatedAt: new Date() })
        .where(eq(userTableRows.id, data.rowId))
        .returning()
    
      return updated
    })
  3. apps/sim/lib/table/service.ts, line 867-925 (link)

    deleteRowsByFilter returns inaccurate affectedCount under concurrency

    The SELECT at line 894 and the DELETE inside the transaction at lines 903–916 are not atomic. affectedCount is returned as matchingRows.length (from the pre-delete read at line 894), but under concurrency, some of those rows may have already been deleted by another request before this transaction's DELETE runs — so the returned count is inflated.

    Note that deleteRowsByIds (the sibling function) was already fixed to use .returning() for an accurate count. The same fix should be applied here:

    const deletedIds: string[] = []
    await db.transaction(async (trx) => {
      for (let i = 0; i < rowIds.length; i += TABLE_LIMITS.DELETE_BATCH_SIZE) {
        const batch = rowIds.slice(i, i + TABLE_LIMITS.DELETE_BATCH_SIZE)
        const deleted = await trx
          .delete(userTableRows)
          .where(
            and(
              eq(userTableRows.tableId, data.tableId),
              eq(userTableRows.workspaceId, data.workspaceId),
              sql`${userTableRows.id} = ANY(ARRAY[${sql.join(batch.map((id) => sql`${id}`), sql`, `)}])`
            )
          )
          .returning({ id: userTableRows.id })
        deletedIds.push(...deleted.map((r) => r.id))
      }
    })
    return { affectedCount: deletedIds.length, affectedRowIds: deletedIds }

Last reviewed commit: 437c332

@waleedlatif1
Collaborator Author

@cursor review

@waleedlatif1
Collaborator Author

@greptile

@waleedlatif1
Collaborator Author

@greptile

@waleedlatif1
Collaborator Author

@cursor review

@waleedlatif1
Collaborator Author

@greptile

@waleedlatif1
Collaborator Author

@cursor review

…rors

- Add 'Unique constraint violation' to upsert error classification
- Wrap PUT/DELETE request.json() in try/catch to return 400 on malformed body
- Apply fixes to both v1 and internal routes
@waleedlatif1
Collaborator Author

@greptile

- Wrap PATCH request.json() in try/catch for both v1 and internal routes
- Rewrite deleteRowsByIds to use .returning() for accurate deletedCount
  under concurrent requests (eliminates SELECT-then-DELETE race)
@waleedlatif1
Collaborator Author

@greptile

@waleedlatif1
Collaborator Author

@cursor review

- Wrap POST handler request.json() in try/catch across all table routes
- Also fix internal DELETE single-row handler
- Every request.json() in table routes now returns 400 on malformed body
- Replace unsafe `as string | null` cast with typeof check
- Prevents File object from bypassing workspaceId validation
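The cast-to-typeof fix in the commit above can be sketched as follows; `readWorkspaceId` is an illustrative helper name. `FormData.get()` returns `string | File | null`, so an unsafe `as string | null` cast lets a `File` object reach string-only validation, while an explicit typeof check closes that gap:

```typescript
// Sketch: accept only a non-empty string from a FormData field.
// A File (or anything else) yields null, so downstream workspaceId
// validation never sees a non-string value.
function readWorkspaceId(value: unknown): string | null {
  return typeof value === 'string' && value.length > 0 ? value : null
}
```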
@waleedlatif1
Collaborator Author

@greptile

@waleedlatif1
Collaborator Author

@cursor review

…aw()

- Use instanceof File check instead of unsafe `as File | null` cast
- Add regex validation on column name before sql.raw() interpolation
- Guard request.formData() with try/catch in file upload
- Guard all .toISOString() calls with instanceof Date checks
- Replace verifyTableWorkspace double-fetch with direct comparison
- Fix relative imports to absolute (@/app/api/table/utils)
- Fix internal list tables leaking fields via ...t spread
- Normalize schema in internal POST create table response
- Remove redundant pre-check in internal create (service handles atomically)
- Make 'maximum table limit' return 403 consistently (was 400 in internal)
- Add 'Row not found' → 404 classification in PATCH handlers
- Add NAME_PATTERN validation before sql.raw() in validation.ts
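The NAME_PATTERN guard before `sql.raw()` could look like this sketch. The exact pattern in validation.ts may differ; this assumes the common SQL-identifier convention (letters, digits, underscores, not starting with a digit), and `assertSafeColumnName` is a hypothetical name:

```typescript
// Sketch: allow-list a column name before it is ever interpolated into
// raw SQL. Anything outside the identifier pattern is rejected, which
// blocks injection via quotes, semicolons, or whitespace.
const NAME_PATTERN = /^[A-Za-z_][A-Za-z0-9_]*$/

function assertSafeColumnName(name: string): string {
  if (!NAME_PATTERN.test(name)) {
    throw new Error(`Invalid column name: ${name}`)
  }
  return name
}
```

Validating the identifier is necessary here because `sql.raw()` bypasses driver-level parameterization entirely; parameter binding only protects values, not identifiers.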
@waleedlatif1
Collaborator Author

@greptile

@waleedlatif1
Collaborator Author

@cursor review


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

@waleedlatif1 waleedlatif1 merged commit a368827 into feat/mothership-copilot Mar 5, 2026
4 checks passed
@waleedlatif1 waleedlatif1 deleted the waleedlatif1/tables-rest-api branch March 5, 2026 21:16
