
feat(io): Add delete_stream to Storage trait #2216

Open

CTTY wants to merge 11 commits into apache:main from CTTY:ctty/delete-stream-new

Conversation

CTTY (Collaborator) commented Mar 6, 2026:

Which issue does this PR close?

What changes are included in this PR?

  • Add delete_stream to the Storage trait to support batch deletes (see the sketch after this list)
  • Expose delete_stream in FileIO as well
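
For context, a minimal sketch of what such a trait method could look like, assuming a futures BoxStream of path strings; the error type, lifetime, and exact signature here are illustrative, not the PR's actual API:

use async_trait::async_trait;
use futures::stream::BoxStream;

// Hypothetical stand-ins for the crate's own error and result types.
type Error = Box<dyn std::error::Error + Send + Sync>;
type Result<T> = std::result::Result<T, Error>;

#[async_trait]
trait Storage {
    /// Delete every path yielded by the stream, batching requests
    /// where the backend supports it.
    async fn delete_stream(&self, paths: BoxStream<'static, String>) -> Result<()>;
}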

Are these changes tested?

Added unit tests.
Added integration tests for OpenDAL.


CTTY (Collaborator, Author) commented:

I didn't add this for GCS because fake-gcs-server doesn't support batch delete.

Contributor commented:

Sorry, I don't get your point.

CTTY (Collaborator, Author) commented Mar 9, 2026:

This works for S3, but the same test would fail for GCS: we use fake-gcs-server for testing, and fake-gcs-server doesn't support batch delete.

fsouza/fake-gcs-server#1443


// Use relativize_path for remaining paths to avoid rebuilding the operator each time.
while let Some(path) = paths.next().await {
    let relative_path = self.relativize_path(&path)?;
Contributor commented:

I see some problems with this approach. What if we are deleting paths like the following:

s3://bucket1/a.txt
s3://bucketb/b.txt

CTTY (Collaborator, Author) commented Mar 10, 2026:

I thought about this as well, but I think it would be tricky to solve: OpenDAL's operator is tied to a bucket, so if a user passes in locations from multiple buckets, we would have to create an operator for each incoming location, which would be slow.

Also, I'd say this is an acceptable limitation, because most users will not have data spanning several buckets.

Contributor commented:

We don't need to create an operator for each location; we just need to create an operator for each bucket.

CTTY (Collaborator, Author) commented:

I've updated the implementation to use a HashMap mapping bucket names to deleters.
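
A hedged sketch of that bucket-to-deleter cache, assuming OpenDAL's Operator::deleter batch-delete API; operator_for_bucket and the URI helpers are hypothetical stand-ins for the PR's actual path handling:

use std::collections::HashMap;
use opendal::{Deleter, Operator};

// Sketch only: one operator (and one deleter) per bucket, not per path.
async fn delete_paths(
    paths: Vec<String>,
    operator_for_bucket: impl Fn(&str) -> opendal::Result<Operator>,
) -> opendal::Result<()> {
    let mut deleters: HashMap<String, Deleter> = HashMap::new();
    for path in &paths {
        let bucket = bucket_of(path);
        if !deleters.contains_key(&bucket) {
            let op = operator_for_bucket(&bucket)?; // hypothetical helper
            deleters.insert(bucket.clone(), op.deleter().await?);
        }
        deleters
            .get_mut(&bucket)
            .expect("just inserted")
            .delete(relative_path_of(path))
            .await?;
    }
    // Closing a deleter flushes any buffered batch requests.
    for (_, mut deleter) in deleters {
        deleter.close().await?;
    }
    Ok(())
}

// Hypothetical URI helpers for illustration only.
fn bucket_of(path: &str) -> String {
    let rest = path.trim_start_matches("s3://");
    rest.split('/').next().unwrap_or("").to_string()
}

fn relative_path_of(path: &str) -> &str {
    let rest = path.trim_start_matches("s3://");
    rest.splitn(2, '/').nth(1).unwrap_or("")
}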


}

#[tokio::test]
async fn test_file_io_s3_delete_stream() {
Contributor commented:

Why only S3?


Development

Successfully merging this pull request may close these issues.

Support batch delete in Storage
