
Refactor sandbox.writeFile to handle large files#505

Draft
aron-cf wants to merge 10 commits into main from sbx-1

Conversation

@aron-cf
Contributor

@aron-cf aron-cf commented Mar 20, 2026

What changed

writeFile now accepts ReadableStream<Uint8Array>

The public writeFile API has been expanded to accept string | ReadableStream<Uint8Array> as content, removing the previous 32 MiB size limit. Strings continue to work exactly as before. Passing a ReadableStream streams bytes directly to disk with no buffering and no size cap.

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const sandbox = getSandbox(env.Sandbox, 'stream-upload');

    // Stream the request body directly to disk; no string needed
    await sandbox.writeFile("/workspace/upload.bin", request.body);

    // Write a string as before
    await sandbox.writeFile("/workspace/hello.txt", "hello world");

    return new Response("ok");
  }
};

How the size limit is bypassed

The 32 MiB limit previously came from Cloudflare's JSRPC serialization at the Worker → Durable Object boundary. writeFile calls are now intercepted at the getSandbox() proxy layer and converted to byte-oriented ReadableStreams before crossing the RPC boundary — Cloudflare's RPC infrastructure streams these without buffering, bypassing the limit. On the container side, files are written incrementally chunk-by-chunk using Bun.file().writer(), keeping memory usage bounded regardless of file size.

See docs for details: https://developers.cloudflare.com/workers/runtime-apis/rpc/#readablestream-writeablestream-request-and-response
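A minimal sketch of the normalization step described above (the helper name `toByteStream` is illustrative, not the SDK's actual internal API): both accepted content types become a byte-oriented `ReadableStream` before the call crosses the RPC boundary, so the RPC layer can stream rather than serialize.

```typescript
// Hypothetical sketch: normalize writeFile content into a byte stream.
function toByteStream(
  content: string | ReadableStream<Uint8Array>
): ReadableStream<Uint8Array> {
  if (typeof content === "string") {
    // Strings are encoded once and emitted as a single chunk.
    const bytes = new TextEncoder().encode(content);
    return new ReadableStream<Uint8Array>({
      start(controller) {
        controller.enqueue(bytes);
        controller.close();
      },
    });
  }
  // Streams pass through untouched; the RPC layer streams them unbuffered.
  return content;
}
```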

The websocket transport

This PR changes the websocket transport to use the HTTP protocol for writing a file. This avoids buffering the entire file into memory as a base64-encoded string before sending it over the socket. The implementation is more efficient, but the transport is no longer pure websockets.
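To illustrate why the old websocket path was costly, here is a sketch of what base64-over-websocket transmission requires (assumption: illustrative code, not the SDK's actual transport): the whole body must be materialized in memory before encoding, and base64 inflates it by roughly a third.

```typescript
// Sketch of the old approach: a streaming body cannot travel inside a
// JSON websocket message without first being fully buffered and encoded.
async function encodeForWebSocket(
  body: ReadableStream<Uint8Array>
): Promise<string> {
  // Full copy of the file in memory -- this is the cost being avoided.
  const buffered = new Uint8Array(await new Response(body).arrayBuffer());
  let binary = "";
  for (const byte of buffered) binary += String.fromCharCode(byte);
  return btoa(binary); // ~33% larger than the raw bytes
}
```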

stream-upload example

A new example demonstrating the writeFile streaming API with a full upload/download integrity check. Includes a browser UI and a CLI test script.

To run:

cd examples/stream-upload
npm install
npm run dev

Open http://localhost:8787, pick a file, and click "Upload & Verify" — the UI computes SHA-256 of the original and downloaded file and confirms they match.

CLI test script:

# 35 MB random file against localhost:8787
./test-upload.sh

# Custom server and size
./test-upload.sh http://localhost:8788 50

Generates a random binary file, uploads it, downloads it back, and compares SHA-256 hashes — printing PASS or FAIL with sizes and hashes.

Notable changes

  • The encoding option on writeFile is deprecated

Test coverage

  • SDK (file-client.test.ts): writeFile with ReadableStream — successful write, error handling, network errors
  • Container handler (file-handler.test.ts): missing path param, missing body, successful stream write, service error propagation; merged handleWrite and handleWriteStream test cases
  • Container service (file-service.test.ts): path validation, relative path resolution, empty stream, multi-chunk streams, write failure; removed encoding-specific tests (no longer handled at this layer)
  • 577 container tests and 530 SDK tests pass


@changeset-bot

changeset-bot bot commented Mar 20, 2026

🦋 Changeset detected

Latest commit: 2406433

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package
| Name | Type |
| --- | --- |
| @cloudflare/sandbox | Patch |


aron-cf added 3 commits March 20, 2026 13:45
The `sandbox.writeFile()` endpoint now supports files of any size. We
achieve this by using a byte stream which bypasses the Workers 32mb RPC
file size limit[1].

This means that the rest of the tunneling through to the sandbox agent
can just proxy the request body rather than managing string conversion.

The sandbox agent then writes the raw bytes to disk. This simplifies
the entire flow.

[1]: https://developers.cloudflare.com/workers/runtime-apis/rpc/#readablestream-writeablestream-request-and-response
Contributor

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 2 new potential issues.

View 10 additional findings in Devin Review.

Open in Devin Review

@pkg-pr-new

pkg-pr-new bot commented Mar 20, 2026


npm i https://pkg.pr.new/cloudflare/sandbox-sdk/@cloudflare/sandbox@505

commit: 2406433

@github-actions
Contributor

github-actions bot commented Mar 20, 2026

🐳 Docker Images Published

| Variant | Image |
| --- | --- |
| Default | cloudflare/sandbox:0.0.0-pr-505-2406433 |
| Python | cloudflare/sandbox:0.0.0-pr-505-2406433-python |
| OpenCode | cloudflare/sandbox:0.0.0-pr-505-2406433-opencode |
| Musl | cloudflare/sandbox:0.0.0-pr-505-2406433-musl |
| Desktop | cloudflare/sandbox:0.0.0-pr-505-2406433-desktop |

Usage:

FROM cloudflare/sandbox:0.0.0-pr-505-2406433

Version: 0.0.0-pr-505-2406433


📦 Standalone Binary

For arbitrary Dockerfiles:

COPY --from=cloudflare/sandbox:0.0.0-pr-505-2406433 /container-server/sandbox /sandbox
ENTRYPOINT ["/sandbox"]

Download via GitHub CLI:

gh run download 23446359129 -n sandbox-binary

Extract from Docker:

docker run --rm cloudflare/sandbox:0.0.0-pr-505-2406433 cat /container-server/sandbox > sandbox && chmod +x sandbox

whoiskatrin
whoiskatrin previously approved these changes Mar 20, 2026
@aron-cf aron-cf marked this pull request as draft March 20, 2026 15:22
For websockets we now buffer any ReadableStream into memory. This is
pretty much what we were doing before. It makes the HTTP transport more
efficient though and we need to reconsider transports in the near
future.

For retries we now check the sandbox is healthy _before_ we attempt to
start writing the file. We also gracefully handle retries if the stream
has been consumed by returning the 503 response immediately.
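The retry behaviour described above can be sketched as follows (assumption: `checkHealthy` and `doWrite` are hypothetical names, not the SDK's actual functions). The key ordering constraint is that a one-shot `ReadableStream` cannot be replayed, so health must be verified before any bytes are consumed, and an already-consumed stream must fail fast with a 503 rather than retry.

```typescript
// Hypothetical sketch of the retry guard around a streaming write.
async function writeWithRetryGuard(
  body: ReadableStream<Uint8Array>,
  checkHealthy: () => Promise<boolean>,
  doWrite: (body: ReadableStream<Uint8Array>) => Promise<Response>
): Promise<Response> {
  // Verify the sandbox is up *before* consuming the stream.
  if (!(await checkHealthy())) {
    return new Response("sandbox unavailable", { status: 503 });
  }
  if (body.locked) {
    // The stream was already consumed by a previous attempt; retrying
    // would send an empty body, so return 503 immediately instead.
    return new Response("stream already consumed", { status: 503 });
  }
  return doWrite(body);
}
```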