- …earch-no-longer-works-regression: Update dependencies in pnpm-lock.yaml and package.json for @codemirror packages. Added @codemirror/search version 6.6.0; updated @codemirror/view to version 6.39.15 across multiple files; adjusted imports in code-editor.tsx to include search functionality. This update ensures compatibility with the latest features and improvements in the CodeMirror library.
- …--network-error: fix: update Docker network creation command to specify driver for sta…
- …e-for-git-providers-when-using-local-lan-domains: refactor: update Gitea and GitLab URL handling to prioritize internal URLs if available
- …iled-due-to-email-error-450-the-html-field-contains-invalid-input: fix: add error handling for volume backup notification sending
- …oyment-not-found: refactor: streamline deployment cleanup by consolidating removeLastTenDeployments calls
- …r-volume-backups: refactor: enhance volume backup path handling to ensure proper prefix…
- …t-backups-all-database-dokploy-web-server-backups-are-deleted: refactor: update backup file paths to include app name for better org…
- …working: fix: prevent doubled /v1/ suffix in Azure OpenAI-compatible URLs
- …h-gzip-backupsqlgz-no-such-file-or-directory-error: feat: include backup file in restoreComposeBackup function for improved restore process
- …e-alias-count-indicates-a-resource-exhaustion-attack: feat: add maxAliasCount option to parse function for improved Docker Compose file handling
- …lation-on-git_provider-during-github-setup-userid-is-empty---v0284: refactor: replace authClient with api.user.session.useQuery in multiple components for improved session management
- …erviceName for S3 bucket paths
- Feat/use appname on backups folder
- …-when-deploying-from-docker-image: fix: update success message for service deployment to reflect queued …
```ts
if (command) {
  await execAsyncRemote(serverId, command);
}
```
Uncaught execAsyncRemote can cause deployment creation to fail
`execAsyncRemote(serverId, command)` is not wrapped in a try/catch. Individual deployment cleanup errors are caught (lines 687-702), but this batched remote command, which runs after those loops, is not. If the remote server is unreachable or returns an error here, the exception will propagate up through `removeLastTenDeployments`.

This becomes a critical problem because `removeLastTenDeployments` is now called outside the try block in all of `createDeployment`, `createDeploymentPreview`, `createDeploymentCompose`, etc. A network hiccup during log cleanup would cause the entire deployment creation to fail without creating an error deployment log entry, leaving the caller with an unhandled `TRPCError`.
Consider wrapping this in a try/catch:
```ts
if (command) {
  try {
    await execAsyncRemote(serverId, command);
  } catch (err) {
    console.error("Failed to delete old remote log files during cleanup:", err);
  }
}
```
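More generally, the cleanup step can be made best-effort with a small wrapper so that log pruning can never abort deployment creation. A minimal sketch; the `bestEffort` helper is hypothetical and not part of the Dokploy codebase:

```typescript
// Hypothetical helper: runs a cleanup task and logs failures instead of throwing.
// The name `bestEffort` is illustrative; it does not exist in the codebase.
const bestEffort = async (
  label: string,
  task: () => Promise<void>,
): Promise<void> => {
  try {
    await task();
  } catch (err) {
    console.error(`${label} failed (non-fatal):`, err);
  }
};

// Inside removeLastTenDeployments, the call would then look like:
// await bestEffort("remote log cleanup", () => execAsyncRemote(serverId, command));
```

This keeps the "cleanup must never block the main operation" policy in one place instead of scattering try/catch blocks around every cleanup call site.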
```diff
 const backupFilesPath = `:s3:${backup.destination.bucket}/${appName}/${normalizeS3Path(backup.prefix)}`;

 // --include "*.sql.gz" or "*.zip" ensures nothing else other than the dokploy backup files are touched by rclone
 const rcloneList = `rclone lsf ${rcloneFlags.join(" ")} --include "*${backup.databaseType === "web-server" ? ".zip" : ".sql.gz"}" ${backupFilesPath}`;
 // when we pipe the above command with this one, we only get the list of files we want to delete
 const sortAndPickUnwantedBackups = `sort -r | tail -n +$((${backup.keepLatestCount}+1)) | xargs -I{}`;
 // this command deletes the files
-// to test the deletion before actually deleting we can add --dry-run before ${backupFilesPath}/{}
-const rcloneDelete = `rclone delete ${rcloneFlags.join(" ")} ${backupFilesPath}/{}`;
+// to test the deletion before actually deleting we can add --dry-run before ${backupFilesPath}{}
+const rcloneDelete = `rclone delete ${rcloneFlags.join(" ")} ${backupFilesPath}{}`;
```
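For clarity, the three commands above are meant to be joined into a single piped shell invocation. A self-contained sketch of how they compose; the bucket, app name, flag, and count values below are illustrative, not from the real configuration:

```typescript
// Illustrative values; the real code derives these from the backup configuration.
const rcloneFlags = ["--s3-no-check-bucket"];
const backupFilesPath = ":s3:my-bucket/my-app/db-backups/";
const keepLatestCount = 5;

// List candidate backup files (one filename per line).
const rcloneList = `rclone lsf ${rcloneFlags.join(" ")} --include "*.sql.gz" ${backupFilesPath}`;
// Reverse-sort (newest first), skip the first `keepLatestCount` entries,
// and hand each remaining filename to the delete command via xargs.
const sortAndPickUnwantedBackups = `sort -r | tail -n +$((${keepLatestCount}+1)) | xargs -I{}`;
// Delete one file per xargs invocation; add --dry-run before the path to test safely.
const rcloneDelete = `rclone delete ${rcloneFlags.join(" ")} ${backupFilesPath}{}`;

// The full cleanup pipeline executed on the server:
const cleanupCommand = `${rcloneList} | ${sortAndPickUnwantedBackups} ${rcloneDelete}`;
```

This relies on the backup filenames sorting chronologically, so `sort -r` puts the newest first and `tail -n +$((N+1))` yields everything beyond the N most recent.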
Breaking change: S3 backup path structure is not backward-compatible
The S3 backup destination path has changed from `prefix/filename.sql.gz` to `appName/prefix/filename.sql.gz`. This is a breaking change for existing users who already have backups stored in S3:

- Existing backups remain at the old path (`prefix/filename.sql.gz`)
- New backups are written to the new path (`appName/prefix/filename.sql.gz`)
- The `keepLatestNBackups` cleanup routine now scans the new path exclusively, so it will never find or prune the old backups; they will accumulate indefinitely
- Any user-configured restore jobs pointing to the old path would break silently

The same path change applies across all backup types: `mariadb.ts`, `mongo.ts`, `mysql.ts`, `postgres.ts`, `web-server.ts`, `compose.ts`, `volume-backups/backup.ts`, and `volume-backups/utils.ts`.
A migration guide or at least a clear note in the release notes would help users understand that their old backups are now at a different path and they should manage them manually or relocate them.
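To make the incompatibility concrete, the old and new rclone remote paths for the same backup differ only by the app-name segment. A small sketch; all values below are hypothetical examples, not taken from any real configuration:

```typescript
// Hypothetical example values for illustration only.
const bucket = "dokploy-backups";
const appName = "my-app";
const prefix = "postgres/";

const oldPath = `:s3:${bucket}/${prefix}`;            // pre-v0.28.5 layout
const newPath = `:s3:${bucket}/${appName}/${prefix}`; // v0.28.5 layout

// Affected users could relocate old backups with a one-off command such as
// `rclone move <oldPath> <newPath>`, verified first with --dry-run.
```

Because the cleanup routine only lists under `newPath`, anything left at `oldPath` is invisible to it until moved.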
```diff
   databasePassword: string,
 ) => {
-  return `docker exec -i $CONTAINER_ID sh -c "mongorestore --username '${databaseUser}' --password '${databasePassword}' --authenticationDatabase admin --db ${database} --archive"`;
+  return `docker exec -i $CONTAINER_ID sh -c "mongorestore --username '${databaseUser}' --password '${databasePassword}' --authenticationDatabase admin --db ${database} --archive --drop"`;
```
Behavioral change: --drop drops existing collections before restoring
The `--drop` flag tells `mongorestore` to drop each collection before restoring it. This is a destructive operation: any data in the target collections that was not part of the backup will be permanently deleted.

Previously, `mongorestore` ran in additive mode: documents from the backup were inserted, but existing documents in the database were left untouched. With `--drop`, a restore now fully replaces each collection.

While this may be the intended behavior for a clean restore, users upgrading from an earlier version may not expect this change and could unintentionally destroy data. Consider documenting this change prominently in the release notes.
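One way to keep the old additive behavior available while still offering a clean restore is to make the flag opt-in. A hedged sketch; the `buildMongoRestoreCommand` name and `drop` parameter are hypothetical, not part of the Dokploy API:

```typescript
// Hypothetical command builder mirroring the changed code; names are illustrative.
const buildMongoRestoreCommand = (
  database: string,
  databaseUser: string,
  databasePassword: string,
  drop = false, // opt-in: only destructive when the caller explicitly asks
): string =>
  `docker exec -i $CONTAINER_ID sh -c "mongorestore --username '${databaseUser}' ` +
  `--password '${databasePassword}' --authenticationDatabase admin ` +
  `--db ${database} --archive${drop ? " --drop" : ""}"`;
```

With this shape, existing callers keep the additive restore by default, and the UI could surface a "replace existing data" checkbox that sets `drop: true` with an explicit warning.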
- …ils-to-start-when-port-8080-is-already-in-use-service-crash: fix: improve port conflict detection by enhancing error messages and adding host-level service checks
- fix: enhance container metrics query to support wildcard matching for…
- …trustedorigins-crashes-server-on-db-connection-failure: fix: add error handling to trusted origins retrieval in admin service
This PR promotes changes from `canary` to `main` for version v0.28.5.

🔍 Changes Include:
✅ Pre-merge Checklist:
Greptile Summary
This release (v0.28.5) contains a broad set of fixes and improvements. However, three significant issues require attention before merging:
1. **Deployment cleanup can block deployment creation (critical):** The `removeLastTenDeployments` function is called outside the try/catch in all `createDeployment*` functions. The `execAsyncRemote` call within it (for remote servers) is not guarded by try/catch, so transient network failures during old-log cleanup will propagate and prevent the new deployment from starting, with no error log recorded.

2. **Breaking S3 backup path change (high impact):** All backup types now write to `appName/prefix/filename` instead of `prefix/filename`. Existing backups remain at the old path and will no longer be discovered or pruned by `keepLatestNBackups`. Users upgrading will silently accumulate orphaned backups at the old S3 paths.

3. **MongoDB restore is now destructive (data loss risk):** The `--drop` flag was added to `mongorestore`, meaning each collection is dropped before being restored. This changes behavior from the previous additive restore; users may not expect this and could unintentionally lose data.

Other changes (GitHub OAuth state encoding, Azure AI URL fix, CodeMirror search, Swarm overlay network fix, YAML `maxAliasCount` increase) appear well-implemented.

Confidence Score: 2/5

The uncaught `execAsyncRemote` call in `removeLastTenDeployments` is a critical bug that can cause deployment creation to fail silently when remote log cleanup encounters network issues. The S3 backup path change is a breaking change that will cause all existing S3 backups to become orphaned and no longer automatically cleaned up. The MongoDB `--drop` flag alters restore behavior to be destructive, risking data loss for users who don't expect it. These three issues affect core reliability and data safety.

Last reviewed commit: ec7df05