Rewrite public API docs and verified examples for gem-rewrite-3#187
vipulnsward wants to merge 48 commits into main
Conversation
…ubbing
- Fix auth/header handling, pagination, upload behavior, and README examples
- Add multi-account spec coverage for per-client configuration
- Scrub VCR cassettes and harden VCR filters to avoid leaking credentials
CI was failing because RuboCop runs as part of rake.
- Refactor RestClient signature body computation to reduce complexity
- Extract parallel upload worker to satisfy Metrics/BlockLength
- Clean up VCR scrubber regexes and multi-account spec naming
Use per-example generated secret keys to prevent secret scanners from flagging test fixtures.
Replace pubkey:secret placeholders in recorded Authorization headers with a neutral placeholder to avoid triggering secret scanners.
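The per-example key generation described above can be sketched in plain Ruby (a hedged illustration; the `test_` prefix and key length are assumptions, not the actual spec helper's format):

```ruby
require 'securerandom'

# Hypothetical per-example key: a recognizable prefix plus random hex,
# so nothing in the fixtures looks like a real credential.
secret_key = "test_#{SecureRandom.hex(16)}"
puts secret_key.length # => 37 ("test_" plus 32 hex characters)
```

Because the value is regenerated per example, scanners never see the same "secret" twice and the prefix makes it obviously synthetic.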
- Split v5 migration notes into MIGRATING_V5.md and link from README
- Remove duplicate README upload sections and correct API examples
- Fix examples to unwrap Uploadcare::Result where needed and use the current progress payload
- Fix GroupClient path escaping to preserve '~' in group IDs (fixes signed REST calls)
- Keep FileMetadata cache consistent when operating on explicit different UUIDs
Restructure the codebase into a clean layered architecture:
- Internal layer (Authenticator, ErrorHandler, ThrottleHandler, UserAgent, SignatureGenerator, UploadParamsGenerator) under Internal::
- API parity layer with Api::Rest (8 endpoint classes) and Api::Upload (2 endpoint classes) providing 1:1 endpoint wrappers
- Resources layer (File, Group, Project, Webhook, AddonExecution, DocumentConversion, VideoConversion, FileMetadata) as domain objects
- Collections layer (Paginated, BatchResult) for list/batch responses
- Operations layer (UploadRouter, MultipartUpload) for workflow logic
- Client with domain accessors (files, groups, uploads, project, webhooks, addons, file_metadata, conversions) and raw API access

Remove old Clients::* namespace, Uploader resource, and param utilities.
Full test suite: 589 examples, 0 failures, 93% coverage.
Rewrite multipart_upload_spec.rb with 37 examples covering:
- Sequential upload: chunking, presigned URLs, progress callbacks, request_options/store/metadata passthrough, custom part_size
- Parallel upload: multi-threaded part upload, progress reporting, more threads than parts, worker error propagation
- Edge cases: file without #size (File.size fallback), #original_filename support, unknown MIME type fallback, multipart_start/complete failures, sequential upload_part_to_url errors, excess presigned URLs

Split exception/upload_error.rb into separate files per Zeitwerk convention (UploadTimeoutError, MultipartUploadError, UnknownStatusError).
Remove unreachable ThreadError rescue in parallel worker.
100% line coverage on operations/multipart_upload.rb (70/70 lines).
Full suite: 619 examples, 0 failures, 95.64% coverage.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
Walkthrough
Rearchitects the SDK to a client- and resource-oriented v5: adds Uploadcare::Client, REST and Upload API clients, resource classes, upload operations (router & multipart), internal auth/error/throttle utilities, new exceptions, examples/docs, tooling updates, and many test fixture changes; removes legacy Entity/Client/Param layers and updates the version to 5.0.0.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor App as Application
  participant Client as Uploadcare::Client
  participant Rest as Uploadcare::Api::Rest
  participant HTTP as Faraday/HTTP
  participant UC as Uploadcare REST API
  App->>Client: files.find(uuid)
  Client->>Rest: files.info(uuid)
  Rest->>HTTP: GET /files/{uuid} (signed headers)
  HTTP->>UC: Request
  UC-->>HTTP: 200/4xx JSON
  HTTP-->>Rest: Response
  Rest->>Client: Uploadcare::Result (wrapped)
  Client-->>App: Uploadcare::Resources::File (unwrapped)
```
```mermaid
sequenceDiagram
  autonumber
  actor App as Application
  participant Client as Uploadcare::Client
  participant Router as Uploadcare::Operations::UploadRouter
  participant UploadAPI as Uploadcare::Api::Upload
  participant S3 as External S3
  participant UC as Uploadcare Upload API
  App->>Client: uploads.upload(source, &progress)
  Client->>Router: upload(source)
  alt multipart (big file)
    Router->>UploadAPI: files.multipart_start(...)
    UploadAPI->>UC: POST /multipart/start/
    UC-->>UploadAPI: presigned part URLs + uuid
    loop parts (concurrent)
      Router->>S3: PUT presigned_url (part)
      S3-->>Router: 200 (ETag)
      Router-->>App: progress callback
    end
    Router->>UploadAPI: files.multipart_complete(uuid)
    UploadAPI->>UC: POST /multipart/complete/
    UC-->>UploadAPI: 200 JSON
  else small / direct / url
    Router->>UploadAPI: files.direct or files.from_url
    UploadAPI->>UC: POST base/ or from_url/
    UC-->>UploadAPI: JSON response
  end
  Router-->>Client: normalized Uploadcare::Result
  Client-->>App: Resources::File or result data
```
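The multipart branch above splits the source into fixed-size parts before the concurrent PUT loop; the chunking math can be sketched in plain Ruby (the 5 MB part size is an assumption for illustration, not necessarily the SDK's default):

```ruby
# Hypothetical chunking helper mirroring the "loop parts" step above.
# PART_SIZE is an assumed value, not taken from the SDK.
PART_SIZE = 5 * 1024 * 1024

# Returns [offset, length] pairs covering `total_size` bytes.
def part_ranges(total_size, part_size = PART_SIZE)
  (0...total_size).step(part_size).map do |offset|
    [offset, [part_size, total_size - offset].min]
  end
end

ranges = part_ranges(12 * 1024 * 1024 + 1) # 12 MB plus one trailing byte
puts ranges.length       # => 3
puts ranges.last.inspect # => [10485760, 2097153]
```

Each pair maps to one presigned URL; the last part is simply whatever remains, which is why the spec suite covers "excess presigned URLs" and custom `part_size`.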
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180 minutes
Actionable comments posted: 18
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
spec/fixtures/vcr_cassettes/upload_upload_many.yml (1)
8-12: ⚠️ Potential issue | 🟡 Minor
Partial credential masking: public key still exposed in request body.
While the User-Agent header is now masked with `<uploadcare_public_key>`, the base64-encoded request body still contains the actual public key (5d5bb5639e3f2df33674) in the `UPLOADCARE_PUB_KEY` form field. This is visible when decoding the binary content. Consider whether this exposure is acceptable given that:
- Public keys are less sensitive than secret keys
- VCR cassettes are committed to the repository
If full masking is desired, you may need to regenerate this cassette with VCR's `filter_sensitive_data` hook applied to the request body as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/upload_upload_many.yml` around lines 8 - 12, The VCR cassette still contains the real public key in the base64-encoded form field UPLOADCARE_PUB_KEY inside upload_upload_many.yml; update the VCR configuration (where VCR.configure / filter_sensitive_data is defined, e.g., spec_helper or spec/support/vcr.rb) to add a filter_sensitive_data rule that replaces the public key value (match the raw body or the decoded form field name UPLOADCARE_PUB_KEY) with a placeholder like <uploadcare_public_key>, then regenerate the affected cassette (upload_upload_many.yml) so the request body is masked as well as the User-Agent.
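For illustration, the kind of body scrubbing this comment asks for can be done with a plain `gsub` (a sketch; the pattern and placeholder here are assumptions, not the repo's actual VCR filter):

```ruby
# Illustrative scrubber for the UPLOADCARE_PUB_KEY form field; the
# regex and placeholder are assumptions, not the project's VCR config.
body = 'UPLOADCARE_PUB_KEY=5d5bb5639e3f2df33674&UPLOADCARE_STORE=auto'
masked = body.gsub(/(UPLOADCARE_PUB_KEY=)\h+/, '\1<uploadcare_public_key>')
puts masked # => "UPLOADCARE_PUB_KEY=<uploadcare_public_key>&UPLOADCARE_STORE=auto"
```

In practice the same substitution would run inside a `filter_sensitive_data` hook so the cassette is written pre-masked rather than patched afterwards.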
🟡 Minor comments (20)
spec/TESTING.md-84-98 (1)
84-98: ⚠️ Potential issue | 🟡 Minor
Add language identifier to fenced code block.
The directory structure code block should specify a language identifier for better markdown compliance and rendering. As per static analysis, markdown linters expect all fenced code blocks to have a language specified.
📝 Proposed fix

````diff
-```
+```text
 spec/
 ├── fixtures/
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/TESTING.md` around lines 84 - 98, The fenced code block showing the directory tree that starts with "spec/" should include a language identifier to satisfy markdown linters; update the opening fence from ``` to ```text (or another appropriate language like ```bash) so the block beginning with "spec/" is rendered with a language tag, leaving the block content unchanged.
lib/uploadcare/signed_url_generators/akamai_generator.rb-62-65 (1)
62-65: ⚠️ Potential issue | 🟡 Minor
Harden `sanitized_string` with explicit type validation.
This helper currently raises `NoMethodError` for nil/non-string inputs, which can bubble up from `cdn_host` or `acl`. Raise `ArgumentError` instead for a stable API contract.
Proposed fix

```diff
 def sanitized_string(string)
+  raise ArgumentError, 'Expected String value' unless string.is_a?(String)
+
   string = string[1..] if string[0] == '/'
   string = string[0...-1] if string[-1] == '/'
   string.strip
 end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/signed_url_generators/akamai_generator.rb` around lines 62 - 65, sanitized_string currently fails with NoMethodError for nil/non-string inputs; add an explicit type check at the top of sanitized_string (e.g., unless string.is_a?(String) raise ArgumentError, "sanitized_string expects a String") to enforce the API contract, then proceed with the existing logic (use safe String methods like start_with?/end_with? and strip) so behavior is unchanged for valid inputs but raises ArgumentError for nil or non-String values; reference sanitized_string to locate the change.
.snyk-1-4 (1)
1-4: ⚠️ Potential issue | 🟡 Minor
Update the excluded path in `.snyk` to match the current file location.
The exclusion path `lib/uploadcare/authenticator.rb` does not exist. The authenticator file has been moved to `lib/uploadcare/internal/authenticator.rb`. Update the `.snyk` configuration to exclude the correct path:

```yaml
---
exclude:
  code:
    - lib/uploadcare/internal/authenticator.rb
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.snyk around lines 1 - 4, Update the exclusion entry in the .snyk configuration to point to the authenticator's new location by replacing the current excluded path "lib/uploadcare/authenticator.rb" with "lib/uploadcare/internal/authenticator.rb" so the moved file is properly ignored by Snyk.
spec/fixtures/vcr_cassettes/Upload_API_Integration/Complete_Upload_Workflow/when_uploading_from_URL/uploads_from_URL_and_polls_until_complete.yml-5-8 (1)
5-8: ⚠️ Potential issue | 🟡 Minor
Pin the upload-from-URL fixture to an immutable asset.
This cassette bakes in a `source_url` from the repo's `main` branch. Re-records and live example verification will drift if that file changes or moves, so the underlying example should use a commit-pinned raw URL or another stable fixture host.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/Upload_API_Integration/Complete_Upload_Workflow/when_uploading_from_URL/uploads_from_URL_and_polls_until_complete.yml` around lines 5 - 8, The cassette currently uses a non-immutable source_url in the request body (body.string contains source_url=https%3A%2F%2Fraw.githubusercontent.com%2F...%2Fmain%2Fspec%2Ffixtures%2Fkitten.jpeg); update that value to a commit-pinned raw URL (or other stable fixture host) so the Upload API "from_url" example is deterministic—locate the body.string entry in the YAML and replace the repo main-branch raw URL with the raw URL that includes the specific commit SHA (or point to your stable test asset).
lib/uploadcare/resources/file_metadata.rb-21-24 (1)
21-24: ⚠️ Potential issue | 🟡 Minor
Add fail-fast UUID validation for instance methods.
Line 21 allows `uuid || @uuid` to be `nil`, which can trigger opaque downstream errors. The same pattern appears in `update`, `show`, and `delete`; validate once and raise `ArgumentError` early.
Suggested refactor

```diff
+def resolve_uuid!(uuid)
+  target_uuid = uuid || @uuid
+  raise ArgumentError, "uuid is required" if target_uuid.nil? || target_uuid.to_s.empty?
+
+  target_uuid
+end
+
 def index(uuid: nil, request_options: {})
+  target_uuid = resolve_uuid!(uuid)
   response = Uploadcare::Result.unwrap(
-    client.api.rest.file_metadata.index(uuid: uuid || @uuid, request_options: request_options)
+    client.api.rest.file_metadata.index(uuid: target_uuid, request_options: request_options)
   )
   @metadata = response if response.is_a?(Hash)
   self
 end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/file_metadata.rb` around lines 21 - 24, Instance methods index, update, show, and delete currently do uuid || `@uuid` which can be nil and cause opaque downstream errors; add a fail-fast validation at the start of each method (or a small private helper used by them) that checks resolved_uuid = uuid || `@uuid` and raises ArgumentError with a clear message if resolved_uuid.nil?; update the call sites in index, update, show, and delete to use the validated resolved_uuid when invoking client.api.rest.file_metadata.* so the API never receives a nil UUID.
api_examples/rest_api/delete_files_uuid_storage.rb-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Set execute permission for this script.
The shebang is present, but CI reports missing executable mode for this file. Please set `+x` to keep script execution and lint behavior consistent.
Suggested fix

```shell
git update-index --chmod=+x api_examples/rest_api/delete_files_uuid_storage.rb
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/rest_api/delete_files_uuid_storage.rb` at line 1, The script has a shebang but lacks executable permission; update the file mode to executable by running git update-index --chmod=+x on the file (or chmod +x locally and commit the change) so CI recognizes it as an executable script (target: the file with the shebang at the top, delete_files_uuid_storage.rb).
api_examples/rest_api/delete_groups_uuid.rb-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Set execute permission for this script.
CI reports this script is missing executable mode. With a shebang present, please mark it executable to avoid lint noise and invocation issues.
Suggested fix

```shell
git update-index --chmod=+x api_examples/rest_api/delete_groups_uuid.rb
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/rest_api/delete_groups_uuid.rb` at line 1, The script that starts with the shebang "#!/usr/bin/env ruby" needs its executable bit set; mark the file as executable in the Git index (e.g., run git update-index --chmod=+x <the script>) or set the filesystem mode (chmod +x <the script>) so CI no longer flags missing execute permission.
api_examples/rest_api/delete_files_storage.rb-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Set executable bit for this script file.
Line 1 uses a shebang, but CI reports this file is not executable. Please commit it with `+x` so script-permission lint passes and direct invocation works.
Suggested fix

```shell
# apply in repo root
chmod +x api_examples/rest_api/delete_files_storage.rb
git add --chmod=+x api_examples/rest_api/delete_files_storage.rb
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/rest_api/delete_files_storage.rb` at line 1, This script (api_examples/rest_api/delete_files_storage.rb) has a shebang but lacks the executable bit; mark the file as executable and commit that change so CI passes by running a file-permission update (e.g., chmod +x on delete_files_storage.rb) and then stage and commit the permission change so the repo records the executable bit.
api_examples/rest_api/delete_webhooks_unsubscribe.rb-1-4 (1)
1-4: ⚠️ Potential issue | 🟡 Minor
Missing execute permission for script file.
The pipeline reports that this script has a shebang (`#!/usr/bin/env ruby`) but lacks execute permission. Run `chmod +x api_examples/rest_api/delete_webhooks_unsubscribe.rb` to fix the RuboCop `Lint/ScriptPermission` warning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/rest_api/delete_webhooks_unsubscribe.rb` around lines 1 - 4, The script that starts with the shebang "#!/usr/bin/env ruby" (delete_webhooks_unsubscribe.rb) is missing execute permission; run chmod +x on that script to restore execute bit and resolve the RuboCop Lint/ScriptPermission warning.
examples/url_upload.rb-31-34 (1)
31-34: ⚠️ Potential issue | 🟡 Minor
Guard against nil `size` to prevent potential runtime error.
If `result.size` is `nil` (e.g., for certain upload types or edge cases), the division on line 33 will raise `NoMethodError`. Consider a safe fallback.
🛡️ Proposed fix

```diff
 puts "UUID: #{result.uuid}"
 puts "Filename: #{result.original_filename}"
-puts "Size: #{(result.size / 1024.0).round(2)} KB"
+puts "Size: #{result.size ? "#{(result.size / 1024.0).round(2)} KB" : 'Unknown'}"
 puts "MIME type: #{result.mime_type}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/url_upload.rb` around lines 31 - 34, Guard against nil result.size before performing arithmetic: update the print for size (the expression using result.size in examples/url_upload.rb) to handle nil safely (e.g., use result.size.to_f or (result.size || 0) to compute the KB value or display a fallback like "unknown"). Locate the code that prints "Size: #{(result.size / 1024.0).round(2)} KB" and replace the division with a nil-safe expression such as (result.size.to_f / 1024.0).round(2) or conditional logic that prints "Size: unknown" when result.size is nil.
examples/group_creation.rb-15-20 (1)
15-20: ⚠️ Potential issue | 🟡 Minor
Validate readable files, not only path existence.
`File.exist?` on line 16 also returns true for directories; those then fail later at open time. Tighten this check to regular readable files for clearer failures.
Suggested patch

```diff
 file_paths.each do |path|
-  unless File.exist?(path)
-    puts "Error: File not found: #{path}"
+  unless File.file?(path) && File.readable?(path)
+    puts "Error: Not a readable file: #{path}"
     exit 1
   end
 end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/group_creation.rb` around lines 15 - 20, The current file existence check in the file_paths.each loop uses File.exist?(path) which returns true for directories; change the validation to ensure each path is a regular readable file by replacing the check with File.file?(path) && File.readable?(path) (update the conditional inside the file_paths.each block that prints "Error: File not found: #{path}" and exits) so directories and unreadable files are rejected with a clear error.
spec/fixtures/vcr_cassettes/Upload_API_Integration/Complete_Upload_Workflow/when_uploading_from_URL/handles_async_URL_upload_with_status_checking.yml-10-11 (1)
10-11: ⚠️ Potential issue | 🟡 Minor
Fix User-Agent inconsistency: POST /from_url/ should use SDK identifier.
The cassette records `Faraday v2.14.1` for the first request (POST to `/from_url/`) but `UploadcareRuby/5.0.0/...` for the second (GET to `/from_url/status/`). The SDK's User-Agent setup uses `||=`, which means if Faraday sets a default header first, it won't be overridden. This causes the public upload request to lack the SDK identifier, making it indistinguishable from a raw HTTP client. If the UA wrapping breaks for unauthenticated requests, tests relying on this cassette won't catch it.
Lines 10-11 and 65-67 excerpt:

```yaml
User-Agent:
- Faraday v2.14.1
```

vs.

```yaml
User-Agent:
- UploadcareRuby/5.0.0/<uploadcare_public_key> (Ruby/4.0.1)
```

Ensure the Upload API client applies the UploadcareRuby User-Agent to all requests, including public endpoints, or update the cassette to reflect the intended behavior if this inconsistency is accepted.
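The `||=` pitfall described here can be demonstrated with a plain Hash standing in for the request headers (illustrative only, not the SDK's actual header plumbing):

```ruby
# A default User-Agent is already present, as Faraday would set it.
headers = { 'User-Agent' => 'Faraday v2.14.1' }

headers['User-Agent'] ||= 'UploadcareRuby/5.0.0' # conditional assign: keeps the default
puts headers['User-Agent'] # => Faraday v2.14.1

headers['User-Agent'] = 'UploadcareRuby/5.0.0'   # unconditional assign: overrides it
puts headers['User-Agent'] # => UploadcareRuby/5.0.0
```

An unconditional assignment is what guarantees the SDK identifier reaches every request, authenticated or not.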
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/Upload_API_Integration/Complete_Upload_Workflow/when_uploading_from_URL/handles_async_URL_upload_with_status_checking.yml` around lines 10 - 11, The User-Agent is not being forced for public POST /from_url/ requests because the SDK sets it with ||=, allowing Faraday's default to remain; change the User-Agent assignment in the SDK's user-agent setup (e.g., the method that builds the header like set_user_agent_header or configure_user_agent used by Uploadcare::UploadClient / build_connection) from a conditional-assign (||=) to an unconditional assign (=) so the UploadcareRuby/... string is applied to every request (including post_from_url/public endpoints and the from_url status checker); alternatively, if the inconsistency is accepted, update the cassette to reflect the current behavior for the POST request.
examples/batch_upload.rb-26-54 (1)
26-54: ⚠️ Potential issue | 🟡 Minor
Potential file handle leak if opening fails partway.
If `File.open` fails for one of the paths (e.g., permission error), previously opened files in `files` won't be closed since the `ensure` block only runs after the `begin` block starts. Consider opening files inside the `begin` block or using a safer pattern.
🛡️ Suggested fix

```diff
-files = file_paths.map { |path| File.open(path, 'rb') }
-begin
+files = file_paths.map { |path| File.open(path, 'rb') }
+begin
   client = Uploadcare::Client.new(
     public_key: ENV.fetch('UPLOADCARE_PUBLIC_KEY'),
     secret_key: ENV.fetch('UPLOADCARE_SECRET_KEY')
   )
   # ... rest of code
 rescue StandardError => e
   puts "✗ Batch upload failed: #{e.message}"
   exit 1
 ensure
-  files.each(&:close)
+  files&.each(&:close)
 end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/batch_upload.rb` around lines 26 - 54, The current mapping files = file_paths.map { |path| File.open(path, 'rb') } can leak file handles if File.open raises partway; move the file-opening into the begin block (or use per-file File.open with block) so that any exception occurs after you can safely close already-opened files in the ensure, and change the ensure to only iterate over files that were successfully opened (e.g., ensure files is initialized to [] before begin and push each opened file to files inside begin). Update references to files, File.open, the begin/rescue/ensure structure and results = client.uploads.upload(files, ...) accordingly.
examples/large_file_upload.rb-52-65 (1)
52-65: ⚠️ Potential issue | 🟡 Minor
Potential division by zero in progress calculation.
If `progress[:total]` is 0 (edge case), line 55 will cause a `ZeroDivisionError`. Consider adding a guard.
🛡️ Suggested fix

```diff
 ) do |progress|
   uploaded_mb = (progress[:uploaded] / 1024.0 / 1024.0).round(2)
   total_mb = (progress[:total] / 1024.0 / 1024.0).round(2)
-  percentage = ((progress[:uploaded].to_f / progress[:total]) * 100).round
+  percentage = progress[:total].positive? ? ((progress[:uploaded].to_f / progress[:total]) * 100).round : 0
   part = progress[:part]
   total_parts = progress[:total_parts]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/large_file_upload.rb` around lines 52 - 65, Guard the division by zero when computing percentage from progress by checking progress[:total] (and nil) before dividing: compute a local total = progress[:total].to_f and set percentage = total > 0 ? ((progress[:uploaded].to_f / total) * 100).round : 0; keep the rest of the progress display logic (uploaded_mb, total_mb, bar, filled) unchanged but use the safe percentage variable so ZeroDivisionError cannot occur in the block that processes the progress hash.
examples/upload_with_progress.rb-9-15 (1)
9-15: ⚠️ Potential issue | 🟡 Minor
Validate for a readable regular file, not just existence.
Line 9 currently allows directories and unreadable paths, which then fail deeper in the flow with a less helpful error.
Suggested patch

```diff
-unless file_path && File.exist?(file_path)
+unless file_path && File.file?(file_path) && File.readable?(file_path)
   puts 'Usage: ruby upload_with_progress.rb <file_path>'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/upload_with_progress.rb` around lines 9 - 15, The current check uses File.exist? and allows directories or unreadable files; update the validation around file_path in upload_with_progress.rb to require a regular, readable file by using File.file?(file_path) and File.readable?(file_path) (replace the existing File.exist? check), and keep the same usage/exit behavior so directories or unreadable paths print the usage message and exit early.
lib/uploadcare/collections/batch_result.rb-32-35 (1)
32-35: ⚠️ Potential issue | 🟡 Minor
Make `result`/`problems` keyword args optional in initializer.
Line 32 currently requires both keys even though lines 34-35 already handle missing values gracefully.
Suggested patch

```diff
-def initialize(status:, result:, problems:, client:)
+def initialize(status:, result: nil, problems: nil, client:)
   @status = status
-  @result = result ? result.map { |file_data| Uploadcare::Resources::File.new(file_data, client) } : []
+  @result = Array(result).map { |file_data| Uploadcare::Resources::File.new(file_data, client) }
   @problems = problems || {}
 end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/collections/batch_result.rb` around lines 32 - 35, The initialize method for BatchResult requires result and problems keywords but already handles nils; update the initializer signature to make result and problems optional (e.g., result: nil, problems: nil) and keep the existing lines that map `@result` and default `@problems` so the method still creates Uploadcare::Resources::File objects when result is present and falls back to []/{} when absent; modify the initialize definition (method name: initialize) accordingly and do not change the mapping logic.
examples/upload_with_progress.rb-41-50 (1)
41-50: ⚠️ Potential issue | 🟡 Minor
Harden progress math against zero/unknown totals.
Lines 43 and 48 can produce unstable values (Infinity/NaN) when total or elapsed is zero.
Suggested patch

```diff
-uploaded_mb = (progress[:uploaded] / 1024.0 / 1024.0).round(2)
-total_mb = (progress[:total] / 1024.0 / 1024.0).round(2)
-percentage = ((progress[:uploaded].to_f / progress[:total]) * 100).round
+uploaded_bytes = progress[:uploaded].to_i
+total_bytes = progress[:total].to_i
+next if total_bytes <= 0
+
+uploaded_mb = (uploaded_bytes / 1024.0 / 1024.0).round(2)
+total_mb = (total_bytes / 1024.0 / 1024.0).round(2)
+percentage = ((uploaded_bytes.fdiv(total_bytes)) * 100).clamp(0, 100).round
@@
-speed_mbps = uploaded_mb / elapsed
+speed_mbps = elapsed.positive? ? ((uploaded_bytes / 1024.0 / 1024.0) / elapsed) : 0.0
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/upload_with_progress.rb` around lines 41 - 50, Guard against division-by-zero/NaN by checking progress[:total] and elapsed before doing math: only compute total_mb, percentage, uploaded_mb/total ratios when progress[:total].to_f.positive? (otherwise set percentage to 0 or nil and total_mb to uploaded_mb or 0), and only compute speed_mbps and eta_seconds when elapsed = Time.now - start_time is > 0 and speed_mbps.positive?; update the calculations for uploaded_mb, total_mb, percentage, speed_mbps and eta_seconds to early-return or default values when progress[:total] or elapsed are zero/unknown to avoid Infinity/NaN.
lib/uploadcare/resources/document_conversion.rb-32-44 (1)
32-44: ⚠️ Potential issue | 🟡 Minor
Missing parameter validation and inconsistent return type.
Two concerns with `convert_document`:
- Missing validation: Unlike `VideoConversion.convert` (see `lib/uploadcare/resources/video_conversion.rb` lines 12-14), which validates required params, this method doesn't check for `:uuid` or `:format`, potentially causing confusing errors downstream.
- Inconsistent return type: This returns a raw Hash, while `VideoConversion.convert` returns a `VideoConversion` instance. Consider returning `new(response, resolved_client)` for consistency.
🛠️ Proposed fix to add validation and consistent return type

```diff
 def self.convert_document(params:, options: {}, client: nil, config: Uploadcare.configuration, request_options: {})
+  raise ArgumentError, 'params must include :uuid' unless params[:uuid]
+  raise ArgumentError, 'params must include :format' unless params[:format]
+
   resolved_client = resolve_client(client: client, config: config)
   paths = Array(params[:uuid]).map do |uuid|
     "#{uuid}/document/-/format/#{params[:format]}/"
   end
-  Uploadcare::Result.unwrap(
+  response = Uploadcare::Result.unwrap(
     resolved_client.api.rest.document_conversions.convert(
       paths: paths, options: options, request_options: request_options
     )
   )
+  new(response, resolved_client)
 end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/document_conversion.rb` around lines 32 - 44, In convert_document (DocumentConversion.convert_document) add the same parameter validation as VideoConversion.convert by ensuring params includes :uuid and :format (raise or return a helpful error if missing), and after calling resolved_client.api.rest.document_conversions.convert keep the unwrapped response in a variable and return an instance (new(response, resolved_client)) instead of returning the raw Hash so the method returns a DocumentConversion object consistently with VideoConversion; use the existing resolved_client and Uploadcare::Result.unwrap flow but wrap the unwrapped response before returning.
api_examples/support/example_helper.rb-10-15 (1)
10-15: ⚠️ Potential issue | 🟡 Minor
Use the compact module form to satisfy RuboCop's style requirement.
The nested module definition violates RuboCop's `Style/ClassAndModuleChildren` rule, which is configured to enforce the compact style. Change to `module ApiExamples::ExampleHelper` to resolve the offense while maintaining identical behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/support/example_helper.rb` around lines 10 - 15, The nested module definition should be converted to the compact form to satisfy RuboCop's Style/ClassAndModuleChildren rule: replace the separate "module ApiExamples" and nested "module ExampleHelper" with a single compact declaration "module ApiExamples::ExampleHelper" while keeping the existing constants (SAMPLE_IMAGE_URL, SAMPLE_VIDEO_URL) and the module_function line unchanged to preserve behavior.
lib/uploadcare.rb-6-14 (1)
6-14: ⚠️ Potential issue | 🟡 Minor
Collapse the split top-level `Uploadcare` module definition.
The empty module definition at line 6 followed by the reopened module at line 11 violates RuboCop's `Style/OneClassPerFile` rule, which is enabled by the `AllCops: NewCops: enable` setting. Merge both blocks into a single module definition, or move the `require_relative` statement outside the module namespace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare.rb` around lines 6 - 14, Remove the duplicated top-level module declaration by merging the empty "module Uploadcare" and the later "module Uploadcare" block into a single module definition: keep the require_relative 'uploadcare/cname_generator' either outside the module or move that require inside the single consolidated "module Uploadcare" and then initialize `@loader` = Zeitwerk::Loader.for_gem and call `@loader.setup` within that one module; ensure only one "module Uploadcare" exists in this file and that references to `@loader` and Zeitwerk::Loader.for_gem remain inside that single module scope (or keep the require_relative outside the module if you prefer).
🧹 Nitpick comments (21)
spec/fixtures/vcr_cassettes/Upload_API_Integration/Edge_Cases/when_uploading_very_small_files/handles_small_fixture_files.yml (1)
25-56: Optional: trim volatile response headers from cassette snapshots.
If these headers are not asserted in specs, excluding them can reduce re-record noise and fixture churn.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/Upload_API_Integration/Edge_Cases/when_uploading_very_small_files/handles_small_fixture_files.yml` around lines 25 - 56, The cassette contains volatile response headers that cause churn; in the Upload_API_Integration cassette (handles_small_fixture_files.yml) remove or normalize non-deterministic headers such as Date, Transfer-Encoding, Connection, Server, X-Uploadcare-Request-Id (and optionally Vary) from the headers block so tests don’t re-record for irrelevant changes — edit the YAML to delete those header entries (or replace their values with a fixed placeholder) while keeping asserted headers like Content-Type and Access-Control-* intact.
lib/uploadcare/internal/upload_io.rb (1)
41-44: Avoid mutating the caller's stream cursor position.

Lines 41 and 44 always rewind the source, which can unexpectedly alter caller state. Consider preserving and restoring the original position when possible.
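To see the hazard in isolation, here is a standalone Ruby sketch; the helper name `copy_preserving_position` is made up for illustration and is not the gem's `UploadIO` API:

```ruby
require 'stringio'
require 'tempfile'

# Copy `source` into a tempfile, then put the caller's read position back
# where it was instead of leaving the stream rewound to offset 0.
def copy_preserving_position(source)
  original_pos = source.pos if source.respond_to?(:pos)
  source.rewind if source.respond_to?(:rewind)
  tempfile = Tempfile.new('upload')
  IO.copy_stream(source, tempfile)
  tempfile.rewind
  source.seek(original_pos, IO::SEEK_SET) if original_pos
  tempfile
end

stream = StringIO.new('hello world')
stream.read(5) # the caller has already consumed 5 bytes
copy = copy_preserving_position(stream)
copy.read   # => "hello world"
stream.pos  # => 5, the caller's cursor is untouched
```

Sources without `pos`/`seek` support simply skip the capture and restore, which is the same guard style the `respond_to?` checks express.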
Proposed refinement
```diff
-    source.rewind if source.respond_to?(:rewind)
-    IO.copy_stream(source, tempfile)
-    tempfile.rewind
-    source.rewind if source.respond_to?(:rewind)
+    original_pos = source.pos if source.respond_to?(:pos)
+    source.rewind if source.respond_to?(:rewind)
+    IO.copy_stream(source, tempfile)
+    tempfile.rewind
+    source.seek(original_pos, IO::SEEK_SET) if original_pos
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/internal/upload_io.rb` around lines 41 - 44, The code unconditionally calls source.rewind before and after IO.copy_stream which mutates the caller's stream position; change this to preserve and restore the original cursor when possible by: check that source.respond_to?(:pos) or source.respond_to?(:seek), capture the current position (e.g. pos = source.pos) before rewinding, perform any necessary rewind only if seekable, do IO.copy_stream(source, tempfile), and then restore the original position with source.seek(pos, IO::SEEK_SET) (or equivalent) instead of always calling source.rewind; ensure you still handle non-seekable streams by only using rewind/seek when supported (referencing the source, tempfile, IO.copy_stream and source.rewind calls in the current diff).

spec/fixtures/vcr_cassettes/Upload_API_Integration/Performance/Upload_speed/uploads_files_in_reasonable_time.yml (1)
1-63: Note: VCR cassette may not meaningfully test upload speed.

VCR replays recorded responses instantly, so this cassette won't validate actual upload performance. If the intent is to ensure uploads complete without timeout errors in a reasonable timeframe, consider either:
- Renaming to reflect the actual test purpose (e.g., "upload completes successfully")
- Adding a comment in the spec explaining that real performance testing requires live API calls
For CI stability purposes, the current approach is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/Upload_API_Integration/Performance/Upload_speed/uploads_files_in_reasonable_time.yml` around lines 1 - 63, The VCR cassette "Upload_API_Integration/Performance/Upload_speed/uploads_files_in_reasonable_time.yml" falsely implies a performance test; update the test/cassette to reflect its real purpose by either renaming the cassette and any referenced example/spec from "uploads_files_in_reasonable_time" to a clearer name like "uploads_files_successfully" (or similar) or add a concise comment in the spec that uses this cassette explaining that VCR replays responses instantly and this cassette does not measure real upload speed; ensure you update any references to the cassette name in the test helper or VCR.use_cassette call to keep CI stable.

spec/fixtures/vcr_cassettes/Upload_API_Integration/Smart_Upload_Detection/detects_and_uses_correct_upload_method_for_URLs.yml (1)
8-8: Pin the URL-upload source to an immutable blob before re-recording.

Line 8 points at `raw.githubusercontent.com/.../main/...`, so any future cassette refresh depends on mutable branch state outside Uploadcare. Please pin the source to a commit SHA in the generating spec/example and then re-record this fixture.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/Upload_API_Integration/Smart_Upload_Detection/detects_and_uses_correct_upload_method_for_URLs.yml` at line 8, The test fixture uses a mutable raw.githubusercontent.com URL (source_url query param) pointing at the `main` branch; change that URL to reference an immutable commit SHA (replace `/main/` with `/commit/<SHA>/` or use the blob URL with the specific SHA) in the `source_url` value inside the recorded request string, then re-run the spec to re-record the cassette so the YAML contains the committed-blob URL instead of the branch-based URL.

lib/uploadcare/api/upload/groups.rb (1)
59-71: Consider simplifying the options access pattern.

The signature/expire extraction repeats the symbol-or-string lookup. A helper variable would make this cleaner:
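The pattern is easy to check in isolation; `signature_params` below is a hypothetical stand-in, not the gem's method:

```ruby
# Options hashes often arrive with either symbol or string keys; capturing
# each lookup once avoids repeating the `||` fallback at every use site.
def signature_params(options)
  signature = options[:signature] || options['signature']
  expire    = options[:expire] || options['expire']

  params = {}
  params['signature'] = signature.to_s if signature
  params['expire']    = expire.to_s if expire
  params
end

signature_params('signature' => 'abc', expire: 1_700_000_000)
# => {"signature"=>"abc", "expire"=>"1700000000"}
signature_params({}) # => {} (keys are omitted rather than set to "")
```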
♻️ Optional simplification
```diff
 def build_group_params(files, options)
   params = { 'pub_key' => upload.config.public_key }

   files.each_with_index do |file, index|
     uuid = file.respond_to?(:uuid) ? file.uuid : file.to_s
     params["files[#{index}]"] = uuid
   end

-  params['signature'] = (options[:signature] || options['signature']).to_s if options[:signature] || options['signature']
-  params['expire'] = (options[:expire] || options['expire']).to_s if options[:expire] || options['expire']
+  signature = options[:signature] || options['signature']
+  expire = options[:expire] || options['expire']
+
+  params['signature'] = signature.to_s if signature
+  params['expire'] = expire.to_s if expire

   params
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/upload/groups.rb` around lines 59 - 71, build_group_params repeats the symbol-or-string lookup for options[:signature]/options['signature'] and options[:expire]/options['expire']; simplify by extracting each value into a helper variable (e.g., signature_val = options[:signature] || options['signature'] and expire_val = options[:expire] || options['expire']) and then set params['signature'] = signature_val.to_s if signature_val and params['expire'] = expire_val.to_s if expire_val, keeping the existing behavior and using the build_group_params method name to locate the change.

lib/uploadcare/configuration.rb (1)
39-40: Consider nil-checking semantics for environment fallback.

When `public_key` or `secret_key` is not provided and the environment variable is missing, an empty string `''` is assigned. Downstream code that checks `public_key.nil?` will get `false` even when no key was actually configured. This could lead to confusing errors when making API calls. Consider using `nil` as the default instead of `''` if the intent is to indicate "not configured":

💡 Suggested change

```diff
-      values[:public_key] = ENV.fetch('UPLOADCARE_PUBLIC_KEY', '') if values[:public_key].nil?
-      values[:secret_key] = ENV.fetch('UPLOADCARE_SECRET_KEY', '') if values[:secret_key].nil?
+      values[:public_key] = ENV.fetch('UPLOADCARE_PUBLIC_KEY', nil) if values[:public_key].nil?
+      values[:secret_key] = ENV.fetch('UPLOADCARE_SECRET_KEY', nil) if values[:secret_key].nil?
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/configuration.rb` around lines 39 - 40, The current fallback assigns empty strings to values[:public_key] and values[:secret_key] which masks "not configured" checks; change the fallback to nil by using ENV[...] (or ENV.fetch(..., nil)) instead of ENV.fetch(..., '') so that when no env var is present the resulting values remain nil; update the assignments referencing values[:public_key] and values[:secret_key] accordingly so downstream nil? checks behave correctly.

examples/simple_upload.rb (2)
36-38: Write failure messages to STDERR.

For CLI scripts, errors should go to STDERR to keep STDOUT clean for piping/automation.
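A self-contained sketch of the stream separation; the `capture_streams` helper is illustrative only:

```ruby
require 'stringio'

# `puts` writes to $stdout while `warn` writes to $stderr, so redirecting
# or piping STDOUT leaves diagnostics visible on the terminal.
def capture_streams
  out = StringIO.new
  err = StringIO.new
  orig_out, orig_err = $stdout, $stderr
  $stdout, $stderr = out, err
  yield
  [out.string, err.string]
ensure
  $stdout, $stderr = orig_out, orig_err
end

stdout_text, stderr_text = capture_streams do
  puts 'uploaded OK'          # pipeline-friendly result on STDOUT
  warn 'Upload failed: boom'  # human-facing diagnostic on STDERR
end

stdout_text # => "uploaded OK\n"
stderr_text # => "Upload failed: boom\n"
```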
Suggested patch
```diff
 rescue StandardError => e
-  puts "✗ Upload failed: #{e.message}"
+  warn "✗ Upload failed: #{e.message}"
   exit 1
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/simple_upload.rb` around lines 36 - 38, The rescue block currently writes the failure message to STDOUT using puts in the rescue StandardError => e handler; change it to write to STDERR (e.g., use STDERR.puts or $stderr.puts) so CLI error output goes to stderr, and keep the exit 1 behavior intact (refer to the rescue StandardError => e handler and the existing exit 1).
9-13: Tighten path validation to files only.

Line 9 uses `File.exist?`, which allows directories and defers failure to Line 25. Prefer `File.file?` for early, clearer validation.

Suggested patch
```diff
-unless file_path && File.exist?(file_path)
+unless file_path && File.file?(file_path)
   puts 'Usage: ruby simple_upload.rb <file_path>'
   puts 'Example: ruby simple_upload.rb photo.jpg'
   exit 1
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/simple_upload.rb` around lines 9 - 13, The current check uses File.exist?(file_path) which passes for directories; change the validation to ensure the path is a regular file by replacing File.exist? with File.file? when checking the variable file_path in the top-level validation block so the script errors early and clearly for non-file paths (keep the same usage messages and exit 1 behavior).

examples/README.md (1)
23-33: Optional readability polish for repetitive bullet phrasing.

Several adjacent descriptions start with "Upload …"; varying the lead verb would make the list scan a bit smoother.
Example wording tweak
```diff
 - `examples/simple_upload.rb`
-  Upload one file and print its UUID and CDN URL.
+  Send one file and print its UUID and CDN URL.
 - `examples/upload_with_progress.rb`
-  Upload one large file with multipart progress reporting.
+  Demonstrate multipart progress reporting on a large file upload.
 - `examples/batch_upload.rb`
-  Upload multiple files in one call.
+  Submit multiple files in one call.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/README.md` around lines 23 - 33, The list in README.md repeats the verb "Upload" across several bullets; edit the bullet lines referencing example files (examples/upload_with_progress.rb, examples/batch_upload.rb, examples/large_file_upload.rb, examples/url_upload.rb, examples/group_creation.rb) to vary the lead verbs for readability (e.g., "Upload one file and print..." -> keep, change others to "Show multipart progress for a large file", "Send multiple files in one call", "Force multipart upload and display throughput", "Import a remote URL and demonstrate async polling", "Create a group from multiple uploads and print details") so adjacent descriptions don't all start with the same word while preserving the filenames and intent.

api_examples/README.md (1)
24-25: Avoid hard-coded verification dates in long-lived docs.

The fixed date will go stale quickly. Consider wording this as "last verified" with an update convention (release checklist/CI-generated docs) to keep trust high.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/README.md` around lines 24 - 25, The README line "Verified against a real Uploadcare demo account on `2026-03-16`" hard-codes a date that will become stale; update that line in api_examples/README.md to a non-static phrasing such as "Last verified: <date>" or "Last verified against a real Uploadcare demo account" and add a short note about the update convention (for example referencing the canonical scripts in `api_examples/rest_api` and `api_examples/upload_api` and stating that verification is performed during releases or CI) so maintainers know when/how to refresh the verification info.

examples/url_upload.rb (1)
20-24: Consider user-friendly error handling for missing environment variables.
`ENV.fetch` raises `KeyError` with a generic message if the variable is missing. For a user-facing example script, consider catching this or providing a more helpful message upfront.

💡 Suggested improvement
```diff
 begin
+  public_key = ENV['UPLOADCARE_PUBLIC_KEY']
+  secret_key = ENV['UPLOADCARE_SECRET_KEY']
+
+  unless public_key && secret_key
+    puts 'Error: UPLOADCARE_PUBLIC_KEY and UPLOADCARE_SECRET_KEY environment variables are required.'
+    puts 'Set them via: export UPLOADCARE_PUBLIC_KEY=your_key UPLOADCARE_SECRET_KEY=your_secret'
+    exit 1
+  end
+
   client = Uploadcare::Client.new(
-    public_key: ENV.fetch('UPLOADCARE_PUBLIC_KEY'),
-    secret_key: ENV.fetch('UPLOADCARE_SECRET_KEY')
+    public_key: public_key,
+    secret_key: secret_key
   )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/url_upload.rb` around lines 20 - 24, The script currently uses ENV.fetch for UPLOADCARE_PUBLIC_KEY and UPLOADCARE_SECRET_KEY which raises KeyError with a generic message; update examples/url_upload.rb to detect missing env vars before calling Uploadcare::Client.new (or rescue KeyError around that call) and print a clear, user-friendly message telling the user which variable(s) are missing and how to set them so the example can run; reference the Uploadcare::Client.new invocation and the ENV.fetch calls when making the change.

spec/fixtures/vcr_cassettes/upload_from_url_basic.yml (1)
1-58: Consider a more descriptive cassette filename.

This cassette captures a 403 failure scenario where `pub_key` is empty (line 8). The name `upload_from_url_basic.yml` suggests a successful basic upload flow rather than an error case. Consider renaming to something like `upload_from_url_missing_pub_key.yml` or `upload_from_url_403_error.yml` to better reflect its purpose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/fixtures/vcr_cassettes/upload_from_url_basic.yml` around lines 1 - 58, The cassette filename upload_from_url_basic.yml is misleading because it records a 403 missing pub_key error; rename the file to a descriptive name such as upload_from_url_missing_pub_key.yml (or upload_from_url_403_error.yml) and update any test references that load the cassette (look for VCR.use_cassette or cassette name strings referencing "upload_from_url_basic.yml") to the new filename so the fixture name matches the recorded scenario.

lib/uploadcare/api/rest/video_conversions.rb (1)
36-39: Consider URL-encoding the token parameter.

The `token` is interpolated directly into the URL path. If tokens could contain special characters (unlikely but possible), consider using `CGI.escape` or `URI.encode_www_form_component`. If tokens are guaranteed to be alphanumeric/safe, this is fine as-is.

```diff
+require 'cgi'
+
 def status(token:, request_options: {})
-  rest.get(path: "/convert/video/status/#{token}/", params: {}, headers: {},
+  rest.get(path: "/convert/video/status/#{CGI.escape(token.to_s)}/", params: {}, headers: {},
            request_options: request_options)
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/rest/video_conversions.rb` around lines 36 - 39, The status method currently interpolates token directly into the path (in the rest.get call), which can break if token contains special characters; update status to URL-encode the token (e.g., via CGI.escape or URI.encode_www_form_component) before building the path and ensure the encoding library is required if not already present so the rest.get path uses the encoded token.

api_examples/support/run_rest_example.rb (1)
9-157: Address RuboCop ModuleLength violation.

The CI pipeline is failing because this module exceeds the line limit. Consider extracting related example groups into separate methods or modules (e.g., `run_file_examples`, `run_addon_examples`, `run_conversion_examples`) to improve maintainability and satisfy the linter.

♻️ Suggested refactoring approach
```diff
 module ApiExamples
   module RunRestExample
     module_function

+    def run_file_examples(client)
+      case File.basename($PROGRAM_NAME)
+      when 'get_files.rb' then client.files.list(limit: 2)
+      when 'get_files_uuid.rb'
+        ExampleHelper.with_uploaded_file { |file| client.files.find(uuid: file.uuid) }
+      # ... other file examples
+      end
+    end
+
+    def run_addon_examples(client)
+      # ... addon examples
+    end
+
     def call
       ExampleHelper.run do |client|
-        case File.basename($PROGRAM_NAME)
-        when 'get_project.rb'
-          # ... large case statement
+        run_file_examples(client) ||
+          run_addon_examples(client) ||
+          run_conversion_examples(client) ||
+          raise("No REST API example mapped for #{File.basename($PROGRAM_NAME)}")
       end
     end
   end
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/support/run_rest_example.rb` around lines 9 - 157, The method call in this file is too long (RuboCop ModuleLength); refactor by extracting related case branches into smaller helper methods and delegate from call to those helpers, for example, create run_file_examples(client), run_group_examples(client), run_webhook_examples(client), run_addon_examples(client) and run_conversion_examples(client) and move the matching when branches (file-related, group-related, webhook-related, addon-related, conversion-related) into those methods; update call to invoke the appropriate helper based on File.basename($PROGRAM_NAME) (or have call dispatch to a mapping of filename => helper) while preserving existing behavior, using the same symbols (call, ExampleHelper.run, ExampleHelper.with_uploaded_file/with_uploaded_files/with_uploaded_group/with_webhook/with_uploaded_pdf/with_uploaded_video) and keeping the fallback raise unchanged.

lib/uploadcare/internal/throttle_handler.rb (1)
23-34: Guard against non-positive `max_attempts`.

If `max_attempts` is 0 or negative (e.g., due to misconfiguration), `(attempts - 1).times` will not iterate, but the final `yield` on line 33 still executes once. This may be acceptable, but if a negative value is unexpected, consider adding validation.

🛡️ Optional defensive guard
```diff
 def handle_throttling(max_attempts: nil)
   attempts = max_attempts
   if attempts.nil?
     attempts = respond_to?(:config) ? config.max_throttle_attempts : Uploadcare.configuration.max_throttle_attempts
   end
+  attempts = [attempts.to_i, 1].max # Ensure at least one attempt
   (attempts - 1).times do |index|
     return yield
   rescue Uploadcare::Exception::ThrottleError => e
     sleep(e.timeout * (2**index))
   end
   yield
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/internal/throttle_handler.rb` around lines 23 - 34, The method handle_throttling should guard against non-positive max_attempts: validate attempts after computing it (from the max_attempts param or config.max_throttle_attempts) and if attempts is nil or <= 0, raise an ArgumentError (or normalize to a minimum of 1) to avoid unexpected behavior; update handle_throttling to check attempts and either raise ArgumentError with a clear message referencing max_attempts (or set attempts = 1) before entering the (attempts - 1).times loop and before yielding, and keep the existing rescue of Uploadcare::Exception::ThrottleError intact.

lib/uploadcare/api/rest/document_conversions.rb (1)
24-27: UUID not URI-encoded in path.

Unlike `FileMetadata`, which uses `URI.encode_www_form_component(uuid)`, the `uuid` parameter here is directly interpolated into the path. If UUIDs could contain special characters (unlikely for standard UUIDs but possible for malformed input), this could cause issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/rest/document_conversions.rb` around lines 24 - 27, The info method in DocumentConversions is interpolating uuid directly into the request path; update it to URI-encode the uuid like other endpoints (e.g., use URI.encode_www_form_component(uuid)) before building the path so special characters are safely escaped; locate the info method in document_conversions.rb and change the path construction for rest.get to use the encoded uuid.

lib/uploadcare/client/groups_accessor.rb (1)
12-16: Positional argument for `uuids` differs from the keyword-arg pattern.

The `create` method accepts `uuids` as a positional argument while other accessors consistently use keyword arguments (e.g., `find(group_id:, ...)`). This inconsistency may be intentional for ergonomics, but consider aligning with the keyword pattern for API consistency.
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/client/groups_accessor.rb` around lines 12 - 16, The create method in GroupsAccessor currently takes uuids as a positional arg; change its signature to use a keyword argument (def create(uuids:, request_options: {}, **options)) and pass that keyword through to Uploadcare::Resources::Group.create (i.e., Uploadcare::Resources::Group.create(uuids: uuids, client: client, request_options: request_options, **options)); also update any call sites to call create(uuids: <array>) so the accessor matches the other keyword-arg accessors.

lib/uploadcare/resources/document_conversion.rb (1)
16-22: Instance method `info` requires a `uuid` parameter despite being on an instance.

The `info` method is an instance method but requires `uuid` as a parameter rather than using the instance's own UUID. This is unusual: typically instance methods operate on the instance's data. Consider either:

- Making this a class method (like `info_for`)
- Having the instance store a `uuid` and use it internally

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/document_conversion.rb` around lines 16 - 22, The instance method `info` currently requires a `uuid:` parameter even though it should operate on the instance; change it to use the instance's stored identifier (e.g., `self.uuid`) or convert it to a class method. Specifically, either (A) remove the `uuid:` parameter from `def info(...)` and call the API with `uuid: uuid || self.uuid` (or simply `self.uuid`) inside `Uploadcare::Result.unwrap(... )`, ensuring the instance has an attribute `uuid` and update callers accordingly, or (B) rename and convert the method to a class method (e.g., `self.info_for(uuid:, request_options: {})`) that keeps the `uuid:` parameter and leaves the instance API alone; adjust the method signature and usages depending on which approach you choose (method name: `info`, alternative: `info_for`).lib/uploadcare/api/rest/files.rb (1)
lib/uploadcare/api/rest/files.rb (1)

42-44: UUID not URI-encoded in paths.

Similar to `DocumentConversions`, the `uuid` parameter is directly interpolated without URI encoding. For consistency with `FileMetadata`, which encodes path components, consider applying the same treatment here.

♻️ Proposed fix to add URI encoding
Add at top of file:
```ruby
require 'uri'
```

Then in methods:
```diff
 def info(uuid:, params: {}, request_options: {})
-  rest.get(path: "/files/#{uuid}/", params: params, headers: {}, request_options: request_options)
+  encoded_uuid = URI.encode_www_form_component(uuid)
+  rest.get(path: "/files/#{encoded_uuid}/", params: params, headers: {}, request_options: request_options)
 end
```

Also applies to: 52-54, 62-65
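For a quick illustration of what the encoding buys (the input values below are made up):

```ruby
require 'uri'

good_uuid = 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'
bad_input = '../files/other'

# A well-formed UUID passes through unchanged (hex digits and hyphens are
# already safe), while malformed input can no longer alter the path shape.
URI.encode_www_form_component(good_uuid) # => "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
URI.encode_www_form_component(bad_input) # => "..%2Ffiles%2Fother"

"/files/#{URI.encode_www_form_component(bad_input)}/"
# => "/files/..%2Ffiles%2Fother/" (one literal path segment, no traversal)
```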
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/rest/files.rb` around lines 42 - 44, The UUID is interpolated into request paths without URI-encoding; add require 'uri' at the top of the file and replace direct interpolation of uuid in methods like info (and the other file methods at the noted ranges) with an encoded variable (e.g. encoded_uuid = URI.encode_www_form_component(uuid.to_s)) and use "/files/#{encoded_uuid}/" (and equivalent paths) so path components are properly escaped, consistent with FileMetadata's approach.

lib/uploadcare/resources/file.rb (2)
40-48: Consider simplifying the conditional branch.

The branching logic for empty vs. non-empty params appears unnecessary. Passing an empty hash should produce the same result and simplify the code.
♻️ Proposed simplification
```diff
-response = if params.empty?
-             Uploadcare::Result.unwrap(
-               resolved_client.api.rest.files.info(uuid: uuid, request_options: request_options)
-             )
-           else
-             Uploadcare::Result.unwrap(
-               resolved_client.api.rest.files.info(uuid: uuid, params: params, request_options: request_options)
-             )
-           end
+response = Uploadcare::Result.unwrap(
+  resolved_client.api.rest.files.info(uuid: uuid, params: params, request_options: request_options)
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/file.rb` around lines 40 - 48, The conditional branch around params can be removed: always call Uploadcare::Result.unwrap with resolved_client.api.rest.files.info passing uuid:, params:, and request_options: (allowing params to be an empty hash), so eliminate the if/else and unify into a single call to resolved_client.api.rest.files.info(uuid: uuid, params: params, request_options: request_options) wrapped by Uploadcare::Result.unwrap.
320-333: Consider documenting the expected converter interface.

The duck-typing check for `convert_document` or `convert` methods works, but the error message could be more helpful by listing the expected interface.

📝 Optional: More descriptive error message
```diff
 else
   raise Uploadcare::Exception::ConversionError,
-        "Converter #{converter.name} does not respond to convert_document or convert"
+        "Converter #{converter.name} must implement either :convert_document or :convert class method"
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/file.rb` around lines 320 - 333, The current duck-typing error raised when a converter lacks the expected methods is too vague; update the raise in the conversion branch that references converter, convert_document, and convert (the block that calls converter.convert_document or converter.convert with params_with_symbolized_keys, options, config, and request_options) to include a clearer description of the expected converter interface (e.g. required method names such as convert_document(params:, options:, config:, request_options:) or convert(params:, options:, config:, request_options:) and any required return shape), using Uploadcare::Exception::ConversionError and the converter.name to provide context so troubleshooting is easier.
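A standalone sketch of the duck-typed dispatch with the clearer error; the converter classes here are illustrative, not the gem's:

```ruby
class PdfConverter
  def self.convert(params)
    { converted: params }
  end
end

class BrokenConverter; end

# Prefer the specific hook, fall back to the generic one, and fail loudly
# with the expected interface spelled out in the message.
def dispatch(converter, params)
  if converter.respond_to?(:convert_document)
    converter.convert_document(params)
  elsif converter.respond_to?(:convert)
    converter.convert(params)
  else
    raise ArgumentError,
          "Converter #{converter.name} must implement either " \
          ':convert_document or :convert class method'
  end
end

dispatch(PdfConverter, { uuid: 'x' }) # => {:converted=>{:uuid=>"x"}}
# dispatch(BrokenConverter, {}) raises:
#   ArgumentError: Converter BrokenConverter must implement either
#   :convert_document or :convert class method
```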
Actionable comments posted: 3
🧹 Nitpick comments (3)
examples/group_creation.rb (1)
69-75: Consider using structured file objects instead of hash access.

The code accesses file data using string keys (`file['original_filename']`, `file['uuid']`), which suggests `info.files` returns raw hashes. If the API returns structured `File` objects, consider using accessor methods for type safety. Also, manually constructing the CDN URL could be replaced with the file's `cdn_url` method if available.

🔧 Potential refinement if files are File objects
```diff
 Array(info.files).each_with_index do |file, index|
-  puts "#{index + 1}. #{file['original_filename']}"
-  puts "   UUID: #{file['uuid']}"
-  puts "   Size: #{(file['size'] / 1024.0).round(2)} KB"
-  puts "   URL: #{client.config.cdn_base}#{file['uuid']}/"
+  puts "#{index + 1}. #{file.original_filename}"
+  puts "   UUID: #{file.uuid}"
+  puts "   Size: #{(file.size / 1024.0).round(2)} KB"
+  puts "   URL: #{file.cdn_url}"
   puts
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/group_creation.rb` around lines 69 - 75, Replace raw hash key access inside the Array(info.files).each_with_index loop with the File object's accessors (e.g., use file.original_filename, file.uuid, file.size) and, if available, use file.cdn_url or file.url instead of manually concatenating client.config.cdn_base + file['uuid']; also compute KB from the file.size accessor rather than file['size'] so the loop operates on typed objects and gains type safety and clearer intent.

.rubocop.yml (1)
104-171: Consider incrementally re-enabling RSpec cops over time.

Disabling ~20 RSpec cops is reasonable during a major refactor to reduce noise, but this broadly disables useful checks like `RSpec/VerifiedDoubles`, `RSpec/MultipleExpectations`, and `RSpec/ExampleLength`. Consider tracking these as tech debt to incrementally re-enable and fix violations in follow-up PRs.
Verify each finding against the current code and only fix it if needed. In @.rubocop.yml around lines 104 - 171, The .rubocop.yml currently disables many RSpec cops (e.g., RSpec/VerifiedDoubles, RSpec/MultipleExpectations, RSpec/ExampleLength, RSpec/DescribedClass, RSpec/AnyInstance), which should be tracked and incrementally re-enabled; create a short tech-debt plan and re-enable these cops progressively by (1) adding a checklist or TODOs in the repo (or an issue) listing each disabled cop from the diff, (2) re-enabling a small subset (start with RSpec/VerifiedDoubles and RSpec/AnyInstance) in .rubocop.yml and fixing the test violations, and (3) repeating until all listed cops (including RSpec/MultipleExpectations, RSpec/ExampleLength, RSpec/DescribedClass, RSpec/BeEq, etc.) are restored; include the cop names above when creating the issue so reviewers can track per-cop work.

examples/upload_with_progress.rb (1)
24-25: Consider using consistent size units in threshold messaging.

You display MB using 1024-based conversion but compare against `10_000_000` bytes. Aligning the threshold and message units would make the hint clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/upload_with_progress.rb` around lines 24 - 25, The message uses "10MB" (implying 1024-based MB) but the code compares file_size to 10_000_000 bytes; make the units consistent by replacing the literal threshold with a 1024-based constant (e.g., threshold = 10 * 1024 * 1024) and use that threshold variable in the condition (file_size < threshold) and the printed message (or label it explicitly as "10 MiB"); reference the file_size variable and the existing literal 10_000_000 to locate where to change.
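For reference, the two conventions diverge just enough to matter around a 10 MB boundary; a standalone sketch:

```ruby
mb_threshold  = 10 * 1000 * 1000 # 10_000_000 bytes, decimal "10 MB"
mib_threshold = 10 * 1024 * 1024 # 10_485_760 bytes, binary "10 MiB"

file_size = 10_200_000 # a file that falls between the two thresholds

file_size < mb_threshold  # => false: already over the decimal limit
file_size < mib_threshold # => true: still under the binary limit

# Displaying sizes in the same base used for the comparison keeps the
# message honest; label 1024-based values as MiB:
format('%.2f MiB', file_size / 1024.0 / 1024.0) # => "9.73 MiB"
```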
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ruby.yml:
- Around line 55-65: The diff uses HEAD^ for push events in the workflow
conditional (the else branch where you compute files with git diff --name-only
--diff-filter=ACMRT HEAD^ HEAD -- '*.rb'), which can fail on shallow checkouts;
update the workflow to ensure the parent commit exists before diffing by
fetching more history (e.g., run a git fetch --no-tags --prune --unshallow or
git fetch origin --depth=2) or change the baseline to a fetched ref (e.g., fetch
origin/${{ github.base_ref }} and diff against it) so the git diff invocation
that references HEAD^ succeeds reliably in the non-pull_request branch.
In `@api_examples/support/run_rest_example.rb`:
- Around line 136-148: The response access is inconsistent between document and
video conversion examples: in the documents flow code uses
response.fetch('result') while the videos flow uses
response.result.first.fetch('token'); confirm the actual response shape from
client.conversions.documents.convert and client.conversions.videos.convert and
make both examples use the same access pattern (prefer changing
response.fetch('result') to response.result if the SDK returns an object with a
result method, or change response.result... to response.fetch('result') if it
returns a hash), updating the examples that call ExampleHelper.with_uploaded_pdf
and ExampleHelper.with_uploaded_video and the subsequent status calls (look for
client.conversions.documents.convert, client.conversions.videos.convert, and the
lines that extract 'token') so both use the identical, correct accessor.
In `@examples/upload_with_progress.rb`:
- Around line 10-11: The usage message currently hardcodes "ruby
upload_with_progress.rb ..." (the two puts lines), which is misleading when run
from the repo root; update the usage output to show the correct path or derive
the script name dynamically (e.g., use the running script name or __FILE__/$0)
so the printed example reflects "ruby examples/upload_with_progress.rb
<file_path>" or prints the actual invoked script path; modify the two puts lines
(the usage and example strings) to use that dynamic value so documentation
matches how users will run the script.
---
Nitpick comments:
In @.rubocop.yml:
- Around line 104-171: The .rubocop.yml currently disables many RSpec cops
(e.g., RSpec/VerifiedDoubles, RSpec/MultipleExpectations, RSpec/ExampleLength,
RSpec/DescribedClass, RSpec/AnyInstance), which should be tracked and
incrementally re-enabled; create a short tech-debt plan and re-enable these cops
progressively by (1) adding a checklist or TODOs in the repo (or a ISSUE)
listing each disabled cop from the diff, (2) re-enabling a small subset (start
with RSpec/VerifiedDoubles and RSpec/AnyInstance) in .rubocop.yml and fixing the
test violations, and (3) repeating until all listed cops (including
RSpec/MultipleExpectations, RSpec/ExampleLength, RSpec/DescribedClass,
RSpec/DescribedClass, RSpec/BeEq, etc.) are restored; include the cop names
above when creating the issue so reviewers can track per-cop work.
In `@examples/group_creation.rb`:
- Around line 69-75: Replace raw hash key access inside the
Array(info.files).each_with_index loop with the File object's accessors (e.g.,
use file.original_filename, file.uuid, file.size) and, if available, use
file.cdn_url or file.url instead of manually concatenating
client.config.cdn_base + file['uuid']; also compute KB from the file.size
accessor rather than file['size'] so the loop operates on typed objects and
gains type safety and clearer intent.
In `@examples/upload_with_progress.rb`:
- Around line 24-25: The message uses "10MB" (implying 1024-based MB) but the
code compares file_size to 10_000_000 bytes; make the units consistent by
replacing the literal threshold with a 1024-based constant (e.g., threshold = 10
* 1024 * 1024) and use that threshold variable in the condition (file_size <
threshold) and the printed message (or label it explicitly as "10 MiB");
reference the file_size variable and the existing literal 10_000_000 to locate
where to change.
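The unit fix above can be sketched as a single 1024-based constant driving both the comparison and the printed label; the constant and method names here are illustrative, not the example script's actual code:

```ruby
# 10 MiB, 1024-based, so the printed label and the comparison agree.
MULTIPART_THRESHOLD = 10 * 1024 * 1024

def upload_strategy(file_size)
  if file_size < MULTIPART_THRESHOLD
    puts "File is under #{MULTIPART_THRESHOLD / (1024 * 1024)} MiB, using direct upload"
    :direct
  else
    :multipart
  end
end
```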
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 00bf6ac0-1f9b-4bc7-91b9-295198d3f919
📒 Files selected for processing (53)
- .github/workflows/ruby.yml
- .rubocop.yml
- api_examples/rest_api/delete_files_storage.rb
- api_examples/rest_api/delete_files_uuid_metadata_key.rb
- api_examples/rest_api/delete_files_uuid_storage.rb
- api_examples/rest_api/delete_groups_uuid.rb
- api_examples/rest_api/delete_webhooks_unsubscribe.rb
- api_examples/rest_api/get_addons_aws_rekognition_detect_labels_execute_status.rb
- api_examples/rest_api/get_addons_aws_rekognition_detect_moderation_labels_execute_status.rb
- api_examples/rest_api/get_addons_remove_bg_execute_status.rb
- api_examples/rest_api/get_addons_uc_clamav_virus_scan_execute_status.rb
- api_examples/rest_api/get_convert_document_status_token.rb
- api_examples/rest_api/get_convert_document_uuid.rb
- api_examples/rest_api/get_convert_video_status_token.rb
- api_examples/rest_api/get_files.rb
- api_examples/rest_api/get_files_uuid.rb
- api_examples/rest_api/get_files_uuid_metadata.rb
- api_examples/rest_api/get_files_uuid_metadata_key.rb
- api_examples/rest_api/get_groups.rb
- api_examples/rest_api/get_groups_uuid.rb
- api_examples/rest_api/get_project.rb
- api_examples/rest_api/get_webhooks.rb
- api_examples/rest_api/post_addons_aws_rekognition_detect_labels_execute.rb
- api_examples/rest_api/post_addons_aws_rekognition_detect_moderation_labels_execute.rb
- api_examples/rest_api/post_addons_remove_bg_execute.rb
- api_examples/rest_api/post_addons_uc_clamav_virus_scan_execute.rb
- api_examples/rest_api/post_convert_document.rb
- api_examples/rest_api/post_convert_video.rb
- api_examples/rest_api/post_files_local_copy.rb
- api_examples/rest_api/post_files_remote_copy.rb
- api_examples/rest_api/post_webhooks.rb
- api_examples/rest_api/put_files_storage.rb
- api_examples/rest_api/put_files_uuid_metadata_key.rb
- api_examples/rest_api/put_files_uuid_storage.rb
- api_examples/rest_api/put_webhooks_id.rb
- api_examples/support/example_helper.rb
- api_examples/support/run_rest_example.rb
- api_examples/support/run_upload_example.rb
- api_examples/upload_api/get_from_url_status.rb
- api_examples/upload_api/get_group_info.rb
- api_examples/upload_api/get_info.rb
- api_examples/upload_api/post_from_url.rb
- api_examples/upload_api/post_group.rb
- api_examples/upload_api/post_multipart_complete.rb
- api_examples/upload_api/post_multipart_start.rb
- api_examples/upload_api/put_multipart_part.rb
- examples/batch_upload.rb
- examples/group_creation.rb
- examples/large_file_upload.rb
- examples/simple_upload.rb
- examples/upload_with_progress.rb
- examples/url_upload.rb
- lib/uploadcare/configuration.rb
✅ Files skipped from review due to trivial changes (1)
- api_examples/upload_api/put_multipart_part.rb
🚧 Files skipped from review as they are similar to previous changes (24)
- api_examples/rest_api/get_convert_document_status_token.rb
- api_examples/rest_api/post_convert_document.rb
- api_examples/upload_api/post_multipart_complete.rb
- api_examples/rest_api/post_convert_video.rb
- api_examples/upload_api/get_from_url_status.rb
- api_examples/rest_api/get_convert_video_status_token.rb
- api_examples/rest_api/post_addons_remove_bg_execute.rb
- examples/large_file_upload.rb
- examples/url_upload.rb
- api_examples/rest_api/post_webhooks.rb
- api_examples/upload_api/post_multipart_start.rb
- examples/batch_upload.rb
- lib/uploadcare/configuration.rb
- api_examples/upload_api/get_info.rb
- api_examples/rest_api/get_groups_uuid.rb
- api_examples/rest_api/get_addons_aws_rekognition_detect_labels_execute_status.rb
- api_examples/rest_api/get_files_uuid_metadata_key.rb
- api_examples/rest_api/get_files_uuid_metadata.rb
- api_examples/support/example_helper.rb
- api_examples/rest_api/post_addons_uc_clamav_virus_scan_execute.rb
- examples/simple_upload.rb
- api_examples/upload_api/get_group_info.rb
- api_examples/rest_api/get_groups.rb
- api_examples/rest_api/get_files.rb
Actionable comments posted: 5
♻️ Duplicate comments (11)
lib/uploadcare/internal/authenticator.rb (1)
41-48: ⚠️ Potential issue | 🟠 Major — Validate keys before the simple-auth early return.

The early return on line 43 bypasses both key validations for `Uploadcare.Simple`, so blank credentials can pass through as malformed headers instead of raising `Uploadcare::Exception::AuthError` locally.

Suggested fix:

```diff
 def headers(http_method, uri, body = '', content_type = nil)
   resolved_content_type = content_type || 'application/json'

+  validate_public_key
+  raise Uploadcare::Exception::AuthError, 'Secret Key is blank.' if @config.secret_key.to_s.empty?
+
   return simple_auth_headers(resolved_content_type) if @config.auth_type == 'Uploadcare.Simple'
-
-  raise Uploadcare::Exception::AuthError, 'Secret Key is blank.' if @config.secret_key.to_s.empty?
-
-  validate_public_key

   secure_auth_headers(http_method, uri, body, resolved_content_type)
 end
```

Also applies to: 53-57, 60-64
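The ordering matters because the early return decides which checks ever run. A self-contained sketch of the validate-first shape — `SketchAuthenticator` and `AuthError` are stand-ins, not the gem's `Internal::Authenticator`, and the secure branch's signature computation is omitted:

```ruby
class AuthError < StandardError; end

class SketchAuthenticator
  def initialize(public_key:, secret_key:, auth_type:)
    @public_key = public_key
    @secret_key = secret_key
    @auth_type = auth_type
  end

  def headers
    # Both checks run before any auth-type branching, so the simple-auth
    # early return can no longer skip them.
    raise AuthError, 'Public Key is blank.' if @public_key.to_s.empty?
    raise AuthError, 'Secret Key is blank.' if @secret_key.to_s.empty?

    if @auth_type == 'Uploadcare.Simple'
      { 'Authorization' => "Uploadcare.Simple #{@public_key}:#{@secret_key}" }
    else
      { 'Authorization' => "Uploadcare #{@public_key}:<signature omitted>" }
    end
  end
end
```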
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/internal/authenticator.rb` around lines 41 - 48, The simple-auth branch in headers currently returns simple_auth_headers(resolved_content_type) before key validation so blank credentials slip through; move or add the credential checks so validate_public_key and the secret key presence check run for auth_type 'Uploadcare.Simple' before returning from headers, and ensure simple_auth_headers is only called after those validations; apply the same change to the similar blocks that call simple_auth_headers (the other simple-auth early-return sites around the secure auth header code paths) so both validate_public_key and the secret-key blank check are always executed for Uploadcare.Simple.

lib/uploadcare/internal/upload_io.rb (1)
34-45: ⚠️ Potential issue | 🟠 Major — Ensure exception-safe tempfile cleanup in `wrap_stream`.

If line 39, line 40, or line 44 raises, the tempfile created on line 36 is not cleaned up. This can leak temp files/FDs across failures.

Suggested change:

```diff
 def self.wrap_stream(source, filename:)
   extension = ::File.extname(filename.to_s)
   tempfile = Tempfile.new(['uploadcare-upload', extension.empty? ? '.bin' : extension])
   tempfile.binmode
-
-  source.rewind if source.respond_to?(:rewind)
-  IO.copy_stream(source, tempfile)
-  tempfile.rewind
-  source.rewind if source.respond_to?(:rewind)
-
-  new(tempfile, original_filename: filename, cleanup: true)
+  result = nil
+  begin
+    source.rewind if source.respond_to?(:rewind)
+    IO.copy_stream(source, tempfile)
+    tempfile.rewind
+    source.rewind if source.respond_to?(:rewind)
+    result = new(tempfile, original_filename: filename, cleanup: true)
+  ensure
+    tempfile.close! if result.nil?
+  end
+  result
 end
```
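The ensure-based cleanup can be exercised standalone. A sketch under the same assumptions as the suggested change — `copy_to_tempfile` is illustrative, not the gem's API:

```ruby
require 'tempfile'
require 'stringio'

# The tempfile is closed and unlinked whenever the copy fails,
# and kept (for the caller to clean up later) only on success.
def copy_to_tempfile(source)
  tempfile = Tempfile.new(['sketch-upload', '.bin'])
  tempfile.binmode
  result = nil
  begin
    IO.copy_stream(source, tempfile)
    tempfile.rewind
    result = tempfile
  ensure
    tempfile.close! if result.nil? # close + unlink only on the error path
  end
  result
end

tf = copy_to_tempfile(StringIO.new('hello'))
tf.read # => "hello"
tf.close!
```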
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/internal/upload_io.rb` around lines 34 - 45, The wrap_stream method creates a Tempfile (tempfile) but may leak it if IO.copy_stream or rewinds raise; wrap the temp file usage in exception-safe logic: after creating tempfile, perform source.rewind/IO.copy_stream/tempfile.rewind inside a begin block and in an ensure block close and unlink the tempfile if an exception occurs before returning; alternatively use Tempfile.create/open with a block to auto-clean on error, but ensure that when you call new(tempfile, original_filename: filename, cleanup: true) you do not unlink the tempfile (only unlink on error), referencing the tempfile local and the wrap_stream method/new(...) call to locate where to add the begin/ensure or block form.

lib/uploadcare/operations/multipart_upload.rb (2)
46-60: ⚠️ Potential issue | 🟠 Major — Resolve `part_size` before `multipart_start`.

When `config.multipart_chunk_size` is overridden and the caller omits `part_size`, the server still starts the session with its default chunking while the local reader uses `config.multipart_chunk_size`. That can leave `start_response['parts']` out of sync with the chunks you upload.

🔧 Proposed fix:

```diff
-      start_response = Uploadcare::Result.unwrap(
+      part_size = options.fetch(:part_size, config.multipart_chunk_size)
+      start_response = Uploadcare::Result.unwrap(
         upload_client.files.multipart_start(
           filename: filename,
           size: file_size,
           content_type: content_type,
           request_options: request_options,
-          **options
+          **options.merge(part_size: part_size)
         )
       )
@@
-      part_size = options.fetch(:part_size, config.multipart_chunk_size)
       threads = options.fetch(:threads, 1)
```
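The arithmetic behind the mismatch is easy to see in isolation. Illustrative only, not SDK code — if the session is started with one part size and the local reader chunks with another, the presigned-URL count and the uploaded-chunk count diverge:

```ruby
# Ceiling division: how many parts a file of file_size bytes needs.
def part_count(file_size, part_size)
  (file_size.to_f / part_size).ceil
end

file_size      = 25 * 1024 * 1024
server_default = 5 * 1024 * 1024   # chunking the server assumes
local_override = 10 * 1024 * 1024  # chunking the reader actually uses

part_count(file_size, server_default) # => 5 presigned part URLs issued
part_count(file_size, local_override) # => 3 chunks actually uploaded
```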
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/operations/multipart_upload.rb` around lines 46 - 60, Compute the effective part_size before calling Uploadcare::Result.unwrap/upload_client.files.multipart_start so the server session is created with the same chunk size the client will use: fetch part_size = options.fetch(:part_size, config.multipart_chunk_size) prior to calling multipart_start and pass that part_size (via options or explicit param) into upload_client.files.multipart_start so start_response['parts'] matches the local reader's chunking (affecting multipart_start, part_size, config.multipart_chunk_size, and start_response['parts']).
62-65: ⚠️ Potential issue | 🟠 Major — Forward `request_options` to every presigned part upload.

`multipart_start` and `multipart_complete` honor the caller's timeouts/retries, but each PUT to a presigned part URL drops them. A single stalled part upload can still block the whole multipart flow even when the caller configured request timeouts.

🔧 Proposed fix:

```diff
       if threads > 1
-        upload_parts_parallel(prepared_file, presigned_urls, part_size, threads, &block)
+        upload_parts_parallel(prepared_file, presigned_urls, part_size, threads, request_options, &block)
       else
-        upload_parts_sequential(prepared_file, presigned_urls, part_size, &block)
+        upload_parts_sequential(prepared_file, presigned_urls, part_size, request_options, &block)
       end
@@
-    def upload_parts_sequential(file, presigned_urls, part_size, &block)
+    def upload_parts_sequential(file, presigned_urls, part_size, request_options, &block)
@@
-        upload_client.upload_part_to_url(presigned_url, part_data)
+        upload_client.upload_part_to_url(presigned_url, part_data, request_options: request_options)
@@
-    def upload_parts_parallel(file, presigned_urls, part_size, threads, &block)
+    def upload_parts_parallel(file, presigned_urls, part_size, threads, request_options, &block)
@@
-        run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors, &block)
+        run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors,
+                            request_options, &block)
@@
-    def run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors, &block)
+    def run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors,
+                            request_options, &block)
@@
-        upload_client.upload_part_to_url(presigned_url, part_data)
+        upload_client.upload_part_to_url(presigned_url, part_data, request_options: request_options)
```

Also applies to: 80-90, 96-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/operations/multipart_upload.rb` around lines 62 - 65, The per-part PUTs to presigned URLs are not using the caller's request_options, so individual part uploads ignore configured timeouts/retries; update the code paths that call upload_parts_parallel and upload_parts_sequential (and the internal per-part upload routine they invoke) to accept and forward the request_options argument into each presigned-part upload call so that every HTTP PUT uses the caller's timeouts/retries; specifically, modify upload_parts_parallel, upload_parts_sequential and the internal method that performs the PUT to take a request_options parameter and pass it through when calling the HTTP client for each presigned URL (mirror how multipart_start and multipart_complete use request_options).

lib/uploadcare/api/upload.rb (1)
93-123: ⚠️ Potential issue | 🟠 Major — Move payload materialization outside the retry loop.

Line 101 reads `part_data` per attempt; for IO sources, retries can upload empty bodies after the first failure.

Suggested fix:

```diff
 def upload_part_to_url(presigned_url, part_data, max_retries: 3)
+  data = part_data.respond_to?(:read) ? part_data.read : part_data
   retries = 0
   begin
     uri = URI.parse(presigned_url)
     conn = Faraday.new(url: "#{uri.scheme}://#{uri.host}") do |f|
       f.adapter Faraday.default_adapter
     end

-    data = part_data.respond_to?(:read) ? part_data.read : part_data
-
     response = conn.put(uri.request_uri) do |req|
       req.headers['Content-Type'] = 'application/octet-stream'
       req.body = data
     end
```
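The failure mode is that a stream can only be consumed once. A sketch of the read-once pattern — `send_with_retry` and the block simulating a flaky PUT are illustrative, not the gem's API:

```ruby
require 'stringio'

# The IO is materialized into `data` before the retry loop, so a retry
# resends the same bytes instead of the empty remainder of a consumed stream.
def send_with_retry(part_data, max_retries: 3)
  data = part_data.respond_to?(:read) ? part_data.read : part_data
  attempts = 0
  begin
    attempts += 1
    yield(data, attempts)
  rescue IOError
    retry if attempts < max_retries
    raise
  end
end

bodies = []
send_with_retry(StringIO.new('chunk-1')) do |body, attempt|
  bodies << body
  raise IOError, 'simulated network failure' if attempt == 1
end
bodies # => ["chunk-1", "chunk-1"]
```

Had `read` been called inside the loop, the second attempt would have sent an empty string.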
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/upload.rb` around lines 93 - 123, The upload_part_to_url method reads part_data inside the retry loop which can consume/empty IO streams on the first attempt; move the payload materialization out of the retry block by reading part_data once into a local buffer (e.g., data = part_data.read if part_data.respond_to?(:read) else part_data) before the begin/rescue so each retry uses the same in-memory payload; update references in upload_part_to_url to use that buffered data and keep retry/sleep/exception logic unchanged.

lib/uploadcare/internal/error_handler.rb (1)
39-42: ⚠️ Potential issue | 🟠 Major — Guard non-Hash payloads in `extract_error_message`.

Line 41 assumes a Hash; array/scalar JSON bodies can raise and bypass the intended typed exception mapping.

Suggested fix:

```diff
 def extract_error_message(response)
-  parsed = JSON.parse(response[:body].to_s)
-  parsed['detail'] || parsed.map { |k, v| "#{k}: #{v}" }.join('; ')
+  raw_body = response[:body].to_s
+  parsed = JSON.parse(raw_body)
+
+  case parsed
+  when Hash
+    parsed['detail'] || parsed['error'] || parsed.map { |k, v| "#{k}: #{v}" }.join('; ')
+  when Array
+    parsed.join('; ')
+  else
+    raw_body
+  end
 rescue JSON::ParserError
   response[:body].to_s
 end
```
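A standalone version of the guarded parser shows each branch firing; for simplicity this sketch takes the raw body string directly rather than the response hash:

```ruby
require 'json'

# Guarded extraction: Hash, Array, scalar, and non-JSON bodies all
# produce a readable message instead of raising NoMethodError.
def extract_error_message(body)
  parsed = JSON.parse(body.to_s)
  case parsed
  when Hash  then parsed['detail'] || parsed.map { |k, v| "#{k}: #{v}" }.join('; ')
  when Array then parsed.join('; ')
  else body.to_s
  end
rescue JSON::ParserError
  body.to_s
end

extract_error_message('{"detail":"File is not found."}') # => "File is not found."
extract_error_message('{"store":"invalid value"}')       # => "store: invalid value"
extract_error_message('["bad uuid","bad key"]')          # => "bad uuid; bad key"
extract_error_message('not json at all')                 # => "not json at all"
```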
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/internal/error_handler.rb` around lines 39 - 42, The extract_error_message method assumes JSON.parse(response[:body]) returns a Hash and calls parsed['detail']/map, which fails for arrays or scalar values; update extract_error_message to guard parsed with e.g. parsed.is_a?(Hash) before accessing ['detail'] or mapping, handle Array by joining elements (or converting to string), handle scalar by to_s, and fall back to response[:body].to_s for non-JSON or unexpected types so typed exception mapping won't break.

lib/uploadcare/resources/group.rb (1)
36-45: ⚠️ Potential issue | 🟠 Major — Forward `request_options` into the paginated collection.

`next_page`/`all` cannot preserve initial per-request settings unless this constructor includes `request_options`.

Suggested fix:

```diff
 Uploadcare::Collections::Paginated.new(
   resources: groups,
   next_page: response['next'],
   previous_page: response['previous'],
   per_page: response['per_page'],
   total: response['total'],
   api_client: resolved_client.api.rest.groups,
   resource_class: self,
-  client: resolved_client
+  client: resolved_client,
+  request_options: request_options
 )
```
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/group.rb` around lines 36 - 45, The Paginated constructor call in Uploadcare::Collections::Paginated.new is missing the request_options argument so next_page/all lose per-request settings; update the call in group resource creation to forward the current request_options (e.g. add request_options: request_options) to the Paginated initializer so the collection preserves the original per-request settings when fetching additional pages.

lib/uploadcare/resources/video_conversion.rb (1)
20-24: ⚠️ Potential issue | 🟠 Major — Make `:quality` optional and append it conditionally.

This still rejects valid conversions that rely on Uploadcare's default video quality.

In the Uploadcare video conversion/encoding API, is the quality operation optional, and what default quality is applied when omitted?

Suggested fix:

```diff
 def self.convert(params:, options: {}, client: nil, config: Uploadcare.configuration, request_options: {})
   raise ArgumentError, 'params must include :uuid' unless params[:uuid]
   raise ArgumentError, 'params must include :format' unless params[:format]
-  raise ArgumentError, 'params must include :quality' unless params[:quality]

   paths = Array(params[:uuid]).map do |uuid|
-    "#{uuid}/video/-/format/#{params[:format]}/-/quality/#{params[:quality]}/"
+    path = "#{uuid}/video/-/format/#{params[:format]}/"
+    path += "-/quality/#{params[:quality]}/" if params[:quality]
+    path
   end
```
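The conditional path building can be verified in isolation; the method name and keyword arguments in this sketch are illustrative, not the gem's API:

```ruby
# Append the quality operation only when the caller supplied one,
# leaving the server's default in effect otherwise.
def video_conversion_path(uuid:, format:, quality: nil)
  path = "#{uuid}/video/-/format/#{format}/"
  path += "-/quality/#{quality}/" if quality
  path
end

video_conversion_path(uuid: 'abc-123', format: 'mp4')
# => "abc-123/video/-/format/mp4/"
video_conversion_path(uuid: 'abc-123', format: 'webm', quality: 'best')
# => "abc-123/video/-/format/webm/-/quality/best/"
```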
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/video_conversion.rb` around lines 20 - 24, Remove the strict validation that requires params[:quality] and instead conditionally append the quality segment when present: in the block that builds paths (where Array(params[:uuid]).map is used), stop raising ArgumentError for :quality and change the interpolated string to include "/-/quality/#{params[:quality]}/" only when params.key?(:quality) && params[:quality].present? (otherwise omit that segment), ensuring the rest of the path ("/video/-/format/#{params[:format]}/") remains unchanged so default Uploadcare quality is used when no quality is supplied.

lib/uploadcare/api/upload/files.rb (1)
47-55: ⚠️ Potential issue | 🔴 Critical — Close partially prepared wrappers on error paths.

If wrapping fails on a later file, earlier wrapped entries are leaked because `prepared_files` is never fully assigned.

Suggested fix:

```diff
-      prepared_files = files.map { |file| Uploadcare::Internal::UploadIo.wrap(file) }
       params = Uploadcare::Internal::UploadParamsGenerator.call(
         options: options,
         config: upload.config
       )

-      prepared_files.each { |file| form_data_for(file, params) }
+      prepared_files = []
+      files.each do |file|
+        prepared = Uploadcare::Internal::UploadIo.wrap(file)
+        prepared_files << prepared
+        form_data_for(prepared, params)
+      end

       Uploadcare::Result.unwrap(upload.post(path: '/base/', params: params, request_options: request_options))
```
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/upload/files.rb` around lines 47 - 55, The current use of files.map { Uploadcare::Internal::UploadIo.wrap(file) } can raise part-way and leave earlier wrapped objects unassigned and leaked; initialize prepared_files = [] before wrapping and replace the map with an explicit files.each loop that wraps each file, pushes the result into prepared_files (e.g., wrapped = Uploadcare::Internal::UploadIo.wrap(file); prepared_files << wrapped), then call form_data_for for each wrapped entry and proceed to Uploadcare::Result.unwrap(upload.post(...)); keep the existing ensure block that calls prepared_files&.each(&:close!) so any partially prepared wrappers are closed on error.

lib/uploadcare/collections/paginated.rb (1)
61-70: ⚠️ Potential issue | 🟠 Major — Preserve `request_options` across pagination calls.

The collection drops per-request options after page 1, so `next_page`/`all` can silently lose custom timeout/headers/retry settings.

Suggested fix:

```diff
 class Uploadcare::Collections::Paginated
   include Enumerable
@@
   # @return [Uploadcare::Client, nil] Client for resource instantiation
   attr_reader :client
+  # @return [Hash] Request options reused for follow-up pages
+  attr_reader :request_options
@@
   def initialize(params = {})
@@
     @resource_class = params[:resource_class]
     @client = params[:client]
+    @request_options = params[:request_options] || {}
   end
@@
   def fetch_response(params)
-    Uploadcare::Result.unwrap(api_client.list(params: params))
+    Uploadcare::Result.unwrap(api_client.list(params: params, request_options: request_options))
   end
@@
     self.class.new(
@@
       api_client: api_client,
       resource_class: resource_class,
-      client: client
+      client: client,
+      request_options: request_options
     )
   end
```

Also applies to: 124-140
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/collections/paginated.rb` around lines 61 - 70, The collection initializer drops per-request options so subsequent pagination calls lose custom timeout/headers/retry settings; update initialize to accept params[:request_options] and store it in an instance variable (e.g. `@request_options`) and then ensure pagination helpers (methods like next_page, previous_page, next, previous, and all) pass this `@request_options` along when constructing or invoking API requests for subsequent pages so request-level options persist across pages; also apply the same change to the other constructor/assignment site referenced around the second block (the same pattern at the alternate initialize/assignment) so both code paths preserve request_options.

lib/uploadcare/client.rb (1)
25-32: ⚠️ Potential issue | 🟠 Major — Clone config even when no overrides are passed.

Line 27 still aliases the original config object, so later config mutation can bleed across clients (including `with` with empty options).

Suggested fix:

```diff
 def initialize(config: nil, **options)
   base_config = config || Uploadcare.configuration
-  @config = options.empty? ? base_config : base_config.with(**options)
+  @config = base_config.with(**options)
 end
```
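The aliasing hazard is easy to reproduce in miniature. A sketch of the copy-on-construct pattern under the assumption that the gem's configuration exposes a `with` that returns a copy — `SketchConfig` and `SketchClient` are stand-ins, not the gem's classes:

```ruby
SketchConfig = Struct.new(:public_key, :secret_key, keyword_init: true) do
  # Always returns a duplicate, so callers never share the receiver.
  def with(**overrides)
    dup.tap do |copy|
      overrides.each { |key, value| copy[key] = value }
    end
  end
end

class SketchClient
  attr_reader :config

  def initialize(config: nil, **options)
    base_config = config || SketchConfig.new(public_key: 'global-pub', secret_key: 'global-sec')
    @config = base_config.with(**options) # always a copy, even with no overrides
  end
end
```

Mutating the shared config after constructing a client no longer changes that client's credentials.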
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/client.rb` around lines 25 - 32, The current initialize method assigns the original config object directly (via base_config), allowing mutations to bleed across clients; fix by duplicating the base configuration immediately (e.g. base_config = (config || Uploadcare.configuration).dup) before assigning `@config` so even when options is empty the client gets a cloned config; keep the existing logic for applying overrides (use base_config.with(**options) when options present) and ensure the with(**options) factory still calls self.class.new(config: config, **options) so initialize’s duplication handles cloning consistently.
🧹 Nitpick comments (3)
.rubocop.yml (1)
82-158: Avoid disabling most RSpec cops globally.

Lines 82–158 turn off a large set of RSpec cops for the whole repo, which weakens lint protection for new specs. Prefer scoping disables to legacy paths or moving exceptions into a generated `.rubocop_todo.yml`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.rubocop.yml around lines 82 - 158, The current change disables many RSpec cops (e.g., RSpec/MultipleExpectations, RSpec/ExampleLength, RSpec/DescribedClass, RSpec/AnyInstance, RSpec/BeEq, etc.) globally; instead revert these global "Enabled: false" settings and move only the genuinely legacy failures into a generated .rubocop_todo.yml or scoped disables, and for any remaining necessary exceptions scope them to specific directories or files (e.g., add per-directory RuboCop config or use Exclude for legacy spec paths) so the listed cops are re-enabled for new tests while only legacy specs retain the relaxed rules.

lib/uploadcare/api/rest/file_metadata.rb (2)
51-55: Make the JSON dependency explicit in `update`.

`value.to_json` depends on the JSON extension having been loaded earlier. This file only requires `uri`, so `Uploadcare::Api::Rest::FileMetadata#update` currently inherits a load-order dependency from elsewhere in the gem.

♻️ Proposed change:

```diff
-require 'uri'
+require 'json'
+require 'uri'
@@
-        rest.put(path: "/files/#{encoded_uuid}/metadata/#{encoded_key}/", params: value.to_json, headers: {},
+        rest.put(path: "/files/#{encoded_uuid}/metadata/#{encoded_key}/", params: JSON.generate(value), headers: {},
                  request_options: request_options)
```
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/rest/file_metadata.rb` around lines 51 - 55, The update method in Uploadcare::Api::Rest::FileMetadata relies on value.to_json but the file only requires 'uri', creating a load-order dependency; fix it by explicitly requiring the JSON library at the top of this file (or replace value.to_json with JSON.generate(value)) so JSON encoding is guaranteed; update the Uploadcare::Api::Rest::FileMetadata#update method to use the explicit JSON dependency (either via a top-level require 'json' or JSON.generate) to remove the implicit load-order requirement.
37-40: Encode metadata keys as path segments, not form components.

`URI.encode_www_form_component` applies query/form rules and turns spaces into `+`, but `key` is interpolated into the URL path. If metadata keys are not restricted to already-safe characters, `show`/`update`/`delete` will address the wrong metadata entry.

Also applies to: 52-54, 66-68
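The difference shows up in one line. A sketch comparing form-component encoding (which produces `+` for spaces, wrong inside a path) with percent-encoding via `ERB::Util.url_encode` (one stdlib option for path segments; the gem may prefer another escaper):

```ruby
require 'uri'
require 'erb'

key = 'my key'

URI.encode_www_form_component(key) # => "my+key"  (query/form rules)
ERB::Util.url_encode(key)          # => "my%20key" (usable as a path segment)
```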
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/rest/file_metadata.rb` around lines 37 - 40, The code is using URI.encode_www_form_component for path segments which encodes spaces as '+' (query/form style); change the encoding for path interpolation in file_metadata methods (the places using encoded_uuid/encoded_key in rest.get/post/put/delete — e.g., in show/update/delete methods) to use a percent-encoding suitable for URL path segments (for example URI::DEFAULT_PARSER.escape or equivalent percent-encoding) for both uuid and key so spaces become %20 and other characters are correctly escaped for path segments rather than form components. Ensure you update all occurrences mentioned (around lines with rest.get/... for metadata paths and the other two locations) so the key is consistently encoded as a path segment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/uploadcare/api/rest/document_conversions.rb`:
- Around line 54-59: normalize_bool_param currently coerces any truthy value to
'1', causing strings like "False" or "no" to be flipped; update
normalize_bool_param so it only maps the explicit accepted inputs (true, 1, '1',
'true' => '1' and false, 0, '0', 'false' => '0') and for any other value return
nil (or the original value) instead of coercing to '1' so unknown/ambiguous
inputs aren't silently inverted; locate the normalize_bool_param method and
change the else branch to return nil (or leave the value unchanged) so callers
can handle invalid/ambiguous inputs explicitly.
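The strict mapping described above can be sketched directly; the method body is illustrative and anything outside the accepted set returns `nil` so callers can reject ambiguous input instead of having it silently coerced to `'1'`:

```ruby
# Map only explicit boolean spellings; everything else falls through to nil.
def normalize_bool_param(value)
  case value
  when true, 1, '1', 'true'   then '1'
  when false, 0, '0', 'false' then '0'
  end
end

normalize_bool_param(true)    # => "1"
normalize_bool_param('false') # => "0"
normalize_bool_param('no')    # => nil (previously coerced to "1")
```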
In `@lib/uploadcare/internal/upload_io.rb`:
- Around line 20-22: The path_backed? method currently treats any existing
filesystem path (including directories) as a valid upload source; change its
check to only accept regular readable files by replacing
::File.exist?(source.path) with ::File.file?(source.path) and optionally also
::File.readable?(source.path) so path_backed? returns true only when
source.respond_to?(:path) && source.path && ::File.file?(source.path) (&&
::File.readable?(source.path)).
In `@lib/uploadcare/resources/file_metadata.rb`:
- Around line 19-24: The index method currently assigns API response into
`@metadata` preserving original key types, and show does not write fetched values
back, causing symbol-keyed or stale data for the []/[]= string-keyed accessor;
update index to normalize all keys to strings (e.g., transform response Hash
keys to strings) before assigning to `@metadata` and ensure show (the method that
fetches a single key) writes the fetched value back into `@metadata` using the
stringified key so subsequent self[:key] reads are consistent with []/[]=
behavior; reference the methods index, show, `@metadata`, and the []/[]= accessors
when making these changes.
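The normalization idea above can be sketched with a tiny stand-in: stringify keys on every write so the string-keyed reader always finds what the bulk load or single fetch stored. `MetadataCache` is illustrative, not the gem's `FileMetadata` resource:

```ruby
class MetadataCache
  def initialize
    @metadata = {}
  end

  # Mirrors `index`: bulk-load an API response, normalizing key types.
  def load(response)
    @metadata = response.transform_keys(&:to_s)
  end

  # Mirrors `show` writing a fetched value back under the stringified key.
  def store(key, value)
    @metadata[key.to_s] = value
  end

  def [](key)
    @metadata[key.to_s]
  end
end
```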
In `@lib/uploadcare/resources/file.rb`:
- Around line 8-13: Update the YARD examples to use the client-first entrypoint
instead of calling Uploadcare::File directly: replace usages of
Uploadcare::File.find and Uploadcare::File.upload with the corresponding
Uploadcare::Client-based calls (create a client instance and call its file/find
and file/upload entrypoints or whatever client methods expose find/upload),
preserving the shown behaviors (e.g., accessing original_filename and store:
true) so docs teach the new public surface.
---
Duplicate comments:
In `@lib/uploadcare/api/upload.rb`:
- Around line 93-123: The upload_part_to_url method reads part_data inside the
retry loop which can consume/empty IO streams on the first attempt; move the
payload materialization out of the retry block by reading part_data once into a
local buffer (e.g., data = part_data.read if part_data.respond_to?(:read) else
part_data) before the begin/rescue so each retry uses the same in-memory
payload; update references in upload_part_to_url to use that buffered data and
keep retry/sleep/exception logic unchanged.
In `@lib/uploadcare/api/upload/files.rb`:
- Around line 47-55: The current use of files.map {
Uploadcare::Internal::UploadIo.wrap(file) } can raise part-way and leave earlier
wrapped objects unassigned and leaked; initialize prepared_files = [] before
wrapping and replace the map with an explicit files.each loop that wraps each
file, pushes the result into prepared_files (e.g., wrapped =
Uploadcare::Internal::UploadIo.wrap(file); prepared_files << wrapped), then call
form_data_for for each wrapped entry and proceed to
Uploadcare::Result.unwrap(upload.post(...)); keep the existing ensure block that
calls prepared_files&.each(&:close!) so any partially prepared wrappers are
closed on error.
In `@lib/uploadcare/client.rb`:
- Around line 25-32: The current initialize method assigns the original config
object directly (via base_config), allowing mutations to bleed across clients;
fix by duplicating the base configuration immediately (e.g. base_config =
(config || Uploadcare.configuration).dup) before assigning `@config` so even when
options is empty the client gets a cloned config; keep the existing logic for
applying overrides (use base_config.with(**options) when options present) and
ensure the with(**options) factory still calls self.class.new(config: config,
**options) so initialize’s duplication handles cloning consistently.
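A minimal sketch of that isolation fix, using a Struct-based stand-in config; `DemoClient` and `DEFAULT_CONFIG` are illustrative names, not the gem's classes:

```ruby
Config = Struct.new(:public_key, :timeout)
DEFAULT_CONFIG = Config.new('pk_default', 10)

class DemoClient
  attr_reader :config

  def initialize(config: nil, **options)
    # Duplicate immediately so mutations never bleed across clients,
    # even when no overrides were supplied.
    base_config = (config || DEFAULT_CONFIG).dup
    options.each { |key, value| base_config[key] = value }
    @config = base_config
  end
end

a = DemoClient.new
b = DemoClient.new(timeout: 30)
a.config.public_key = 'pk_mutated'

puts DEFAULT_CONFIG.public_key # prints pk_default -- the shared default is untouched
puts b.config.timeout          # prints 30
```

Without the `.dup`, mutating `a.config` would silently rewrite the global default that every later client inherits.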
In `@lib/uploadcare/collections/paginated.rb`:
- Around line 61-70: The collection initializer drops per-request options so
subsequent pagination calls lose custom timeout/headers/retry settings; update
initialize to accept params[:request_options] and store it in an instance
variable (e.g. `@request_options`) and then ensure pagination helpers (methods
like next_page, previous_page, next, previous, and all) pass this
`@request_options` along when constructing or invoking API requests for subsequent
pages so request-level options persist across pages; also apply the same change
to the other constructor/assignment site referenced around the second block (the
same pattern at the alternate initialize/assignment) so both code paths preserve
request_options.
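A toy model of how that persistence works: the collection stores the `request_options` it was built with and hands them back to the fetcher for every subsequent page. `PaginatedDemo` and the lambda fetcher are illustrative, not the gem's real classes:

```ruby
class PaginatedDemo
  attr_reader :resources, :request_options

  def initialize(resources:, fetcher:, next_url: nil, request_options: {})
    @resources = resources
    @fetcher = fetcher
    @next_url = next_url
    @request_options = request_options
  end

  def next_page
    return nil unless @next_url
    # Per-request settings persist because they ride along on every fetch.
    @fetcher.call(@next_url, @request_options)
  end
end

fetcher = lambda do |url, opts|
  PaginatedDemo.new(resources: ["item-from-#{url}"], fetcher: fetcher, request_options: opts)
end

page1 = PaginatedDemo.new(resources: ['a'], fetcher: fetcher,
                          next_url: 'page2', request_options: { timeout: 5 })
page2 = page1.next_page
puts page2.resources.first           # prints item-from-page2
puts page2.request_options[:timeout] # prints 5
```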
In `@lib/uploadcare/internal/authenticator.rb`:
- Around line 41-48: The simple-auth branch in headers currently returns
simple_auth_headers(resolved_content_type) before key validation so blank
credentials slip through; move or add the credential checks so
validate_public_key and the secret key presence check run for auth_type
'Uploadcare.Simple' before returning from headers, and ensure
simple_auth_headers is only called after those validations; apply the same
change to the similar blocks that call simple_auth_headers (the other
simple-auth early-return sites around the secure auth header code paths) so both
validate_public_key and the secret-key blank check are always executed for
Uploadcare.Simple.
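The ordering fix alone can be sketched as follows; `AuthError` and `headers_for` are illustrative names, not the gem's API:

```ruby
class AuthError < StandardError; end

def headers_for(auth_type, public_key, secret_key)
  if auth_type == 'Uploadcare.Simple'
    # Validate first; only then build the header.
    raise AuthError, 'public key is blank' if public_key.to_s.strip.empty?
    raise AuthError, 'secret key is blank' if secret_key.to_s.strip.empty?
    { 'Authorization' => "Uploadcare.Simple #{public_key}:#{secret_key}" }
  else
    { 'Authorization' => 'Uploadcare <signed>' } # signed-auth path elided
  end
end

begin
  headers_for('Uploadcare.Simple', '', 'demo_secret')
rescue AuthError => e
  puts e.message # prints: public key is blank
end
puts headers_for('Uploadcare.Simple', 'demo_public', 'demo_secret')['Authorization']
```

The bug being fixed is purely one of control flow: an early `return simple_auth_headers(...)` means blank keys produce a malformed `Authorization` header instead of a clear client-side error.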
In `@lib/uploadcare/internal/error_handler.rb`:
- Around line 39-42: The extract_error_message method assumes
JSON.parse(response[:body]) returns a Hash and calls parsed['detail']/map, which
fails for arrays or scalar values; update extract_error_message to guard parsed
with e.g. parsed.is_a?(Hash) before accessing ['detail'] or mapping, handle
Array by joining elements (or converting to string), handle scalar by to_s, and
fall back to response[:body].to_s for non-JSON or unexpected types so typed
exception mapping won't break.
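The defensive parsing that prompt asks for can be sketched directly; the method name mirrors the review, and the body shapes are illustrative:

```ruby
require 'json'

# Hash, Array, scalar, and non-JSON bodies are all handled without raising.
def extract_error_message(body)
  parsed = JSON.parse(body)
  case parsed
  when Hash  then parsed['detail'] || parsed.map { |k, v| "#{k}: #{v}" }.join(', ')
  when Array then parsed.map(&:to_s).join(', ')
  else parsed.to_s
  end
rescue JSON::ParserError
  body.to_s
end

puts extract_error_message('{"detail":"File not found"}') # prints: File not found
puts extract_error_message('["bad uuid","bad key"]')      # prints: bad uuid, bad key
puts extract_error_message('"throttled"')                 # prints: throttled
puts extract_error_message('<html>gateway error</html>')  # prints the raw body
```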
In `@lib/uploadcare/internal/upload_io.rb`:
- Around line 34-45: The wrap_stream method creates a Tempfile (tempfile) but
may leak it if IO.copy_stream or rewinds raise; wrap the temp file usage in
exception-safe logic: after creating tempfile, perform
source.rewind/IO.copy_stream/tempfile.rewind inside a begin block and in an
ensure block close and unlink the tempfile if an exception occurs before
returning; alternatively use Tempfile.create/open with a block to auto-clean on
error, but ensure that when you call new(tempfile, original_filename: filename,
cleanup: true) you do not unlink the tempfile (only unlink on error),
referencing the tempfile local and the wrap_stream method/new(...) call to
locate where to add the begin/ensure or block form.
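The begin/rescue discipline described above looks like this in miniature; `wrap_stream` here is a simplified stand-in for the real method:

```ruby
require 'stringio'
require 'tempfile'

# The tempfile is unlinked only when copying fails; on success, ownership
# passes to the caller, which is responsible for cleanup later.
def wrap_stream(source)
  tempfile = Tempfile.new('upload')
  tempfile.binmode
  begin
    source.rewind if source.respond_to?(:rewind)
    IO.copy_stream(source, tempfile)
    tempfile.rewind
    tempfile
  rescue StandardError
    tempfile.close! # close and unlink on error only
    raise
  end
end

wrapped = wrap_stream(StringIO.new('payload'))
puts wrapped.read # prints: payload
wrapped.close!
```

The asymmetry is the whole point: unlinking unconditionally would destroy the file the caller still needs, while never unlinking leaks a temp file per failed upload.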
In `@lib/uploadcare/operations/multipart_upload.rb`:
- Around line 46-60: Compute the effective part_size before calling
Uploadcare::Result.unwrap/upload_client.files.multipart_start so the server
session is created with the same chunk size the client will use: fetch part_size
= options.fetch(:part_size, config.multipart_chunk_size) prior to calling
multipart_start and pass that part_size (via options or explicit param) into
upload_client.files.multipart_start so start_response['parts'] matches the local
reader's chunking (affecting multipart_start, part_size,
config.multipart_chunk_size, and start_response['parts']).
- Around line 62-65: The per-part PUTs to presigned URLs are not using the
caller's request_options, so individual part uploads ignore configured
timeouts/retries; update the code paths that call upload_parts_parallel and
upload_parts_sequential (and the internal per-part upload routine they invoke)
to accept and forward the request_options argument into each presigned-part
upload call so that every HTTP PUT uses the caller's timeouts/retries;
specifically, modify upload_parts_parallel, upload_parts_sequential and the
internal method that performs the PUT to take a request_options parameter and
pass it through when calling the HTTP client for each presigned URL (mirror how
multipart_start and multipart_complete use request_options).
In `@lib/uploadcare/resources/group.rb`:
- Around line 36-45: The Paginated constructor call in
Uploadcare::Collections::Paginated.new is missing the request_options argument
so next_page/all lose per-request settings; update the call in group resource
creation to forward the current request_options (e.g. add request_options:
request_options) to the Paginated initializer so the collection preserves the
original per-request settings when fetching additional pages.
In `@lib/uploadcare/resources/video_conversion.rb`:
- Around line 20-24: Remove the strict validation that requires params[:quality]
and instead conditionally append the quality segment when present: in the block
that builds paths (where Array(params[:uuid]).map is used), stop raising
ArgumentError for :quality and change the interpolated string to include
"/-/quality/#{params[:quality]}/" only when params.key?(:quality) &&
params[:quality].present? (otherwise omit that segment), ensuring the rest of
the path ("/video/-/format/#{params[:format]}/") remains unchanged so default
Uploadcare quality is used when no quality is supplied.
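The conditional path builder suggested above can be sketched with a plain empty-string check standing in for Rails' `present?`, so the snippet runs without ActiveSupport; `conversion_path` is an illustrative helper, not the gem's method:

```ruby
# Only insert the quality segment when a quality value is actually supplied,
# so the API's default quality applies otherwise.
def conversion_path(uuid, format:, quality: nil)
  quality_segment = quality.to_s.empty? ? '' : "-/quality/#{quality}/"
  "#{uuid}/video/-/format/#{format}/#{quality_segment}"
end

puts conversion_path('abc-123', format: 'mp4')
# prints: abc-123/video/-/format/mp4/
puts conversion_path('abc-123', format: 'webm', quality: 'lighter')
# prints: abc-123/video/-/format/webm/-/quality/lighter/
```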
---
Nitpick comments:
In @.rubocop.yml:
- Around line 82-158: The current change disables many RSpec cops (e.g.,
RSpec/MultipleExpectations, RSpec/ExampleLength, RSpec/DescribedClass,
RSpec/AnyInstance, RSpec/BeEq, etc.) globally; instead revert these global
"Enabled: false" settings and move only the genuinely legacy failures into a
generated .rubocop_todo.yml or scoped disables, and for any remaining necessary
exceptions scope them to specific directories or files (e.g., add per-directory
RuboCop config or use Exclude for legacy spec paths) so the listed cops are
re-enabled for new tests while only legacy specs retain the relaxed rules.
In `@lib/uploadcare/api/rest/file_metadata.rb`:
- Around line 51-55: The update method in Uploadcare::Api::Rest::FileMetadata
relies on value.to_json but the file only requires 'uri', creating a load-order
dependency; fix it by explicitly requiring the JSON library at the top of this
file (or replace value.to_json with JSON.generate(value)) so JSON encoding is
guaranteed; update the Uploadcare::Api::Rest::FileMetadata#update method to use
the explicit JSON dependency (either via a top-level require 'json' or
JSON.generate) to remove the implicit load-order requirement.
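A minimal illustration of the load-order point: `JSON.generate` serializes without relying on the `to_json` monkey-patch having been loaded by some earlier, unrelated require. The payload here is invented:

```ruby
require 'json' # the explicit require removes the load-order dependency

payload = { 'subsystem' => 'uploader', 'pet' => 'cat' }
body = JSON.generate(payload)
puts body # prints: {"subsystem":"uploader","pet":"cat"}
```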
- Around line 37-40: The code is using URI.encode_www_form_component for path
segments which encodes spaces as '+' (query/form style); change the encoding for
path interpolation in file_metadata methods (the places using
encoded_uuid/encoded_key in rest.get/post/put/delete — e.g., in
show/update/delete methods) to use a percent-encoding suitable for URL path
segments (for example URI::DEFAULT_PARSER.escape or equivalent percent-encoding)
for both uuid and key so spaces become %20 and other characters are correctly
escaped for path segments rather than form components. Ensure you update all
occurrences mentioned (around lines with rest.get/... for metadata paths and the
other two locations) so the key is consistently encoded as a path segment.
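The encoding difference at issue is easy to demonstrate in isolation:

```ruby
require 'uri'

# Form encoding turns a space into '+', while a path segment needs %20.
key = 'color scheme'
puts URI.encode_www_form_component(key) # prints: color+scheme   (query/form style)
puts URI::DEFAULT_PARSER.escape(key)    # prints: color%20scheme (path style)
```

Note that `URI::DEFAULT_PARSER.escape` leaves reserved characters such as `/` intact, so keys that may contain them would need stricter per-segment percent-encoding.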
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c2646668-eca3-40a3-b772-13cdba0ef153
📒 Files selected for processing (45)
- .github/workflows/ruby.yml
- .rubocop.yml
- lib/uploadcare/api/rest.rb
- lib/uploadcare/api/rest/addons.rb
- lib/uploadcare/api/rest/document_conversions.rb
- lib/uploadcare/api/rest/file_metadata.rb
- lib/uploadcare/api/rest/files.rb
- lib/uploadcare/api/rest/groups.rb
- lib/uploadcare/api/rest/project.rb
- lib/uploadcare/api/rest/video_conversions.rb
- lib/uploadcare/api/rest/webhooks.rb
- lib/uploadcare/api/upload.rb
- lib/uploadcare/api/upload/files.rb
- lib/uploadcare/api/upload/groups.rb
- lib/uploadcare/client.rb
- lib/uploadcare/client/addons_accessor.rb
- lib/uploadcare/client/api.rb
- lib/uploadcare/client/conversions_accessor.rb
- lib/uploadcare/client/document_conversions_accessor.rb
- lib/uploadcare/client/file_metadata_accessor.rb
- lib/uploadcare/client/files_accessor.rb
- lib/uploadcare/client/groups_accessor.rb
- lib/uploadcare/client/project_accessor.rb
- lib/uploadcare/client/video_conversions_accessor.rb
- lib/uploadcare/client/webhooks_accessor.rb
- lib/uploadcare/collections/batch_result.rb
- lib/uploadcare/collections/paginated.rb
- lib/uploadcare/internal/authenticator.rb
- lib/uploadcare/internal/error_handler.rb
- lib/uploadcare/internal/signature_generator.rb
- lib/uploadcare/internal/throttle_handler.rb
- lib/uploadcare/internal/upload_io.rb
- lib/uploadcare/internal/upload_params_generator.rb
- lib/uploadcare/internal/user_agent.rb
- lib/uploadcare/operations/multipart_upload.rb
- lib/uploadcare/operations/upload_router.rb
- lib/uploadcare/resources/addon_execution.rb
- lib/uploadcare/resources/base_resource.rb
- lib/uploadcare/resources/document_conversion.rb
- lib/uploadcare/resources/file.rb
- lib/uploadcare/resources/file_metadata.rb
- lib/uploadcare/resources/group.rb
- lib/uploadcare/resources/project.rb
- lib/uploadcare/resources/video_conversion.rb
- lib/uploadcare/resources/webhook.rb
🚧 Files skipped from review as they are similar to previous changes (14)
- lib/uploadcare/resources/project.rb
- lib/uploadcare/client/document_conversions_accessor.rb
- lib/uploadcare/api/rest/webhooks.rb
- lib/uploadcare/internal/user_agent.rb
- lib/uploadcare/api/rest/groups.rb
- lib/uploadcare/client/groups_accessor.rb
- lib/uploadcare/client/video_conversions_accessor.rb
- lib/uploadcare/resources/document_conversion.rb
- lib/uploadcare/resources/webhook.rb
- lib/uploadcare/collections/batch_result.rb
- lib/uploadcare/client/project_accessor.rb
- lib/uploadcare/client/files_accessor.rb
- lib/uploadcare/internal/upload_params_generator.rb
- lib/uploadcare/api/rest/addons.rb
Actionable comments posted: 5
♻️ Duplicate comments (5)
lib/uploadcare/operations/multipart_upload.rb (2)
46-60: ⚠️ Potential issue | 🟠 Major

Resolve `part_size` before `multipart_start` to keep server/local chunking consistent.

`multipart_start` is still called before resolving `part_size`, so API part planning can diverge from local reads when config/custom options are used.

Proposed fix

```diff
+        part_size = options.fetch(:part_size, config.multipart_chunk_size)
         start_response = Uploadcare::Result.unwrap(
           upload_client.files.multipart_start(
             filename: filename,
             size: file_size,
             content_type: content_type,
             request_options: request_options,
-            **options
+            **options.merge(part_size: part_size)
           )
         )
@@
-        part_size = options.fetch(:part_size, config.multipart_chunk_size)
         threads = options.fetch(:threads, 1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/operations/multipart_upload.rb` around lines 46 - 60, Compute the effective part_size before calling Uploadcare::Result.unwrap(upload_client.files.multipart_start(...)) so the server's planned parts match local reads: resolve part_size using options.fetch(:part_size, config.multipart_chunk_size) and threads similarly prior to invoking upload_client.files.multipart_start, then pass the resolved part_size (and threads if needed) into the multipart_start call (or include it in the options hash) and use those same symbols (part_size, threads, start_response, presigned_urls, uuid) downstream so local chunking and the API plan remain consistent.
62-66: ⚠️ Potential issue | 🟠 Major

Propagate `request_options` through both part-upload paths.

Per-part uploads still drop `request_options`, so configured timeouts/retries may not apply to presigned PUTs.

Proposed fix

```diff
-      upload_parts_parallel(prepared_file, presigned_urls, part_size, threads, &block)
+      upload_parts_parallel(prepared_file, presigned_urls, part_size, threads, request_options, &block)
     else
-      upload_parts_sequential(prepared_file, presigned_urls, part_size, &block)
+      upload_parts_sequential(prepared_file, presigned_urls, part_size, request_options, &block)
     end
@@
-    def upload_parts_sequential(file, presigned_urls, part_size, &block)
+    def upload_parts_sequential(file, presigned_urls, part_size, request_options, &block)
@@
-        upload_client.upload_part_to_url(presigned_url, part_data)
+        upload_client.upload_part_to_url(presigned_url, part_data, request_options: request_options)
@@
-    def upload_parts_parallel(file, presigned_urls, part_size, threads, &block)
+    def upload_parts_parallel(file, presigned_urls, part_size, threads, request_options, &block)
@@
-        run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors, &block)
+        run_parallel_worker(
+          queue, file_path, part_size, total_size, total_parts,
+          mutex, uploaded, errors, request_options, &block
+        )
@@
-    def run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors, &block)
+    def run_parallel_worker(queue, file_path, part_size, total_size, total_parts, mutex, uploaded, errors, request_options, &block)
@@
-        upload_client.upload_part_to_url(presigned_url, part_data)
+        upload_client.upload_part_to_url(presigned_url, part_data, request_options: request_options)
```

Also applies to: 80-96, 96-116, 118-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/operations/multipart_upload.rb` around lines 62 - 66, The per-part upload paths are not receiving the request_options so presigned PUTs ignore configured timeouts/retries; update the calls to upload_parts_parallel(prepared_file, presigned_urls, part_size, threads, &block) and upload_parts_sequential(prepared_file, presigned_urls, part_size, &block) to include request_options, update those method signatures (upload_parts_parallel and upload_parts_sequential) to accept request_options, and ensure the request_options are forwarded into the actual presigned PUT HTTP calls (the code that performs the PUT for each part) so timeouts/retries are applied; repeat the same propagation for the other per-part upload call sites referenced (the additional upload_parts_* usages).

lib/uploadcare/api/rest/document_conversions.rb (1)
55-60: ⚠️ Potential issue | 🟡 Minor

Stop coercing ambiguous boolean values to `'1'`.

Line 59 silently flips unknown truthy inputs (e.g., `"False"`, `"no"`) to `'1'`, which can invert API intent.

Suggested fix

```diff
     def normalize_bool_param(value)
-      case value
-      when true, 1, '1', 'true' then '1'
-      when false, 0, '0', 'false' then '0'
-      else value ? '1' : '0'
-      end
+      normalized = value.is_a?(String) ? value.strip.downcase : value
+      case normalized
+      when true, 1, '1', 'true' then '1'
+      when false, 0, '0', 'false' then '0'
+      else
+        raise ArgumentError, "Unsupported boolean value: #{value.inspect}"
+      end
     end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/api/rest/document_conversions.rb` around lines 55 - 60, The normalize_bool_param method is coercing unknown truthy inputs to '1' which can invert intent; update normalize_bool_param so it only converts explicitly recognized truthy (true, 1, '1', 'true') to '1' and explicitly recognized falsy (false, 0, '0', 'false') to '0', but for any other input return the original value unchanged (do not default to '1'); ensure the matching accounts for string case-insensitivity where appropriate and keep the function name normalize_bool_param as the location to change.

lib/uploadcare/resources/file.rb (2)
8-13: ⚠️ Potential issue | 🟡 Minor

YARD examples still advertise the legacy entrypoint.

If docs are regenerated from this file, lines 8-13 immediately drift back to `Uploadcare::File.*` instead of the client-first surface this rewrite is introducing.

💡 Proposed fix

```diff
 # @example Finding a file
-#   file = Uploadcare::File.find(uuid: "file-uuid")
+#   client = Uploadcare.client
+#   file = client.files.find(uuid: "file-uuid")
 #   file.original_filename # => "photo.jpg"
 #
 # @example Uploading
-#   file = Uploadcare::File.upload(File.open("photo.jpg"), store: true)
+#   client = Uploadcare.client
+#   file = client.files.upload(::File.open("photo.jpg"), store: true)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/file.rb` around lines 8 - 13, The YARD examples still reference the old class-level entrypoints Uploadcare::File.find and Uploadcare::File.upload; update those examples to use the new client-first surface (e.g., replace Uploadcare::File.find(uuid: "file-uuid") with client.files.find(uuid: "file-uuid") and replace Uploadcare::File.upload(File.open("photo.jpg"), store: true) with client.files.upload(File.open("photo.jpg"), store: true)), ensuring the examples show obtaining a client instance (client = Uploadcare::Client.new(...)) if needed.
288-303: ⚠️ Potential issue | 🟠 Major

`cdn_url` is returning the REST URL here.

Line 301 returns `@url`, which is the file-info endpoint on normal payloads, not the CDN URL. Line 291 also treats `@original_file_url` as a UUID source without checking that it belongs to Uploadcare, so local-copy payloads can synthesize bogus UUIDs from arbitrary upstream URLs. Please constrain UUID extraction to Uploadcare-owned URL shapes and only surface `@original_file_url` as `cdn_url` when it matches `config.cdn_base`.

Verification reference: Uploadcare REST API file info response fields `url` vs `original_file_url`, and the `POST /files/local_copy/` response shape for those fields.
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/file.rb` around lines 288 - 303, The cdn_url and uuid logic is returning REST/endpoints and extracting UUIDs from arbitrary upstream URLs; update File#cdn_url to never return `@url` (the REST file-info endpoint) and instead build the CDN URL only from a validated UUID or from `@original_file_url` only when that URL begins with config.cdn_base, and tighten File#uuid to extract the UUID only from Uploadcare-owned URL shapes (e.g., require config.cdn_base or known Uploadcare host pattern before applying the UUID regex to `@url` or `@original_file_url`); update references to `@url`, `@original_file_url`, uuid, cdn_url and config.cdn_base accordingly so local-copy payloads cannot synthesize bogus UUIDs and cdn_url always yields a proper CDN address.
🧹 Nitpick comments (6)
.rubocop.yml (1)
93-160: Consider using a `.rubocop_todo.yml` baseline instead of permanently disabling RSpec cops globally.

Disabling 23 RSpec cops in `.rubocop.yml` removes valuable test-quality signal. Use a generated `.rubocop_todo.yml` baseline (scoped/temporary) and re-enable high-signal cops incrementally.

Suggested approach

```diff
+inherit_from:
+  - .rubocop_todo.yml
 ...
-RSpec/AnyInstance:
-  Enabled: false
-RSpec/VerifiedDoubles:
-  Enabled: false
-RSpec/MessageSpies:
-  Enabled: false
+# Keep high-signal cops enabled long-term; track remaining offenses in .rubocop_todo.yml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.rubocop.yml around lines 93 - 160, Replace the blanket disabling of many RSpec cops in .rubocop.yml with a generated baseline: run RuboCop's auto-gen to create a .rubocop_todo.yml (rubocop --auto-gen-config), remove the 23 RSpec cop "Enabled: false" entries (e.g., RSpec/MultipleExpectations, RSpec/ExampleLength, RSpec/DescribedClass, RSpec/AnyInstance, etc.) from .rubocop.yml, add an include for the generated .rubocop_todo.yml so the baseline is applied, and then re-enable high-signal cops incrementally by removing them from the todo file and fixing offenses for symbols like RSpec/VerifiedDoubles, RSpec/ReceiveMessages, RSpec/IteratedExpectation, etc.

lib/uploadcare/exception/throttle_error.rb (1)
7-8: Sync YARD params with the new initializer signature.

Line 8 accepts `message` and `timeout:`, but the docblock only documents `timeout`. Add `@param message [String, nil]` to keep generated docs accurate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/exception/throttle_error.rb` around lines 7 - 8, The YARD docblock for ThrottleError#initialize is missing documentation for the first parameter; update the comment above initialize to include "@param message [String, nil] Message for the error" in addition to the existing "@param timeout [Float]" so the doc matches the method signature (initialize(message = nil, timeout: 10.0)).

README.md (1)
447-450: Verify video conversion return type consistency.

The video conversion example accesses `job.result` (line 449) while the document conversion example accesses `job.fetch("result")` (line 442). This suggests different return types, which is documented on lines 452-454. However, this asymmetry may confuse users.

Consider adding inline comments in the code examples to clarify this difference, or alternatively, ensure the video conversion `convert` method also returns a hash for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 447 - 450, The examples are inconsistent: video conversion example uses job.result while document conversion uses job.fetch("result"), which can confuse readers; update the README examples (referencing job.result, job.fetch("result"), and the convert method) to include a brief inline comment explaining that the video conversion client returns an object with a .result accessor while the document conversion returns a hash accessed via fetch, or alternatively change the video convert implementation to return a hash for parity—prefer the README change: add one-line comments beside the video example explaining the return shape and how to access the token so readers understand the difference.

lib/uploadcare/collections/paginated.rb (1)
102-112: Consider adding a limit parameter to `all` to prevent unbounded fetches.

The `all` method fetches all pages without any limit. For collections with many thousands of items, this could lead to memory exhaustion or very long execution times. Consider adding an optional `limit:` parameter or documenting this behavior prominently.

♻️ Optional enhancement

```diff
-    def all
+    def all(limit: nil)
       collection = self
       items = []
       while collection
         items.concat(collection.resources || [])
+        break if limit && items.size >= limit
         collection = collection.next_page
       end
-      items
+      limit ? items.first(limit) : items
     end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/collections/paginated.rb` around lines 102 - 112, The all method unboundedly iterates pages (using collection.next_page and concatenating collection.resources) which can exhaust memory; add an optional limit: parameter (default nil) to all, track a fetched_count, and stop paging when fetched_count >= limit; when concatenating a page, only append up to (limit - fetched_count) items (and return early), handle non-positive limits by returning [] immediately, and update the method signature and docstring to reflect the new limit behavior.

api_examples/support/run_upload_example.rb (1)
60-67: Consider extracting file handle management to a helper.

The pattern of opening a file, performing an operation, and ensuring cleanup is repeated. Consider extracting this to `ExampleHelper.with_fixture_file(name)` for consistency with other `with_*` helpers in the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/support/run_upload_example.rb` around lines 60 - 67, Extract the open/ensure/remove pattern into a new ExampleHelper.with_fixture_file(name) helper that opens ExampleHelper.fixture_path(name) in binary mode, yields the file handle to a block, and guarantees closing the handle and calling ExampleHelper.safe_delete_file(ExampleHelper.uploaded_uuid_from_base_response(response)) when a response exists; then replace run_base_upload to call ExampleHelper.with_fixture_file('kitten.jpeg') { |handle| ExampleHelper.unwrap(client.api.upload.files.direct(file: handle, store: true)) } so run_base_upload no longer manages closing or cleanup directly.

api_examples/support/example_helper.rb (1)
12-12: Avoid third-party media in the URL-upload example.

Line 12 hard-codes `samplelib.com` as the source for a verified example. A host-side move, block, or rate limit will break this flow for reasons unrelated to the SDK, so this should live on an Uploadcare-controlled or repo-controlled URL instead.

Also applies to: 96-100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/support/example_helper.rb` at line 12, The example hard-codes an external third-party URL in the SAMPLE_VIDEO_URL constant which can break tests if that host moves or blocks content; replace SAMPLE_VIDEO_URL with a repo-controlled or Uploadcare-controlled stable URL (e.g., an asset committed to the repo or an official Uploadcare demo URL) and update any other occurrences referenced in the example block (the same pattern around lines 96-100) to use the new controlled URL so the upload example is reliable and not dependent on external third-party hosting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.snyk:
- Around line 2-4: Remove the blanket Snyk exclusion that skips scanning the
entire file path (lib/uploadcare/internal/authenticator.rb) and instead add
targeted Snyk ignore entries for specific vulnerability IDs affecting that file
using Snyk's "ignore" format: reference the exact vulnerability id(s), the path
"lib/uploadcare/internal/authenticator.rb", a concise "reason" and an "expires"
date so the ignore is scoped and time-boxed; ensure the .snyk exclude:block for
that file is deleted and replaced with those issue-scoped ignore entries.
In `@lib/uploadcare/api/upload/files.rb`:
- Around line 206-207: The code currently skips setting
params['check_URL_duplicates'] and params['save_URL_duplicates'] when the option
value is false because it uses truthiness checks; change the condition to test
presence in the options hash (e.g., use if options.key?(:check_URL_duplicates)
and if options.key?(:save_URL_duplicates)) so that explicit false values are
preserved and converted to strings via options[:...].to_s before assigning to
params['...'].
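The presence-vs-truthiness fix can be sketched as follows; the parameter names mirror the review, while the surrounding method is illustrative:

```ruby
# `options.key?` keeps an explicit false, where a truthiness check
# would silently drop it.
def url_upload_params(options)
  params = {}
  params['check_URL_duplicates'] = options[:check_URL_duplicates].to_s if options.key?(:check_URL_duplicates)
  params['save_URL_duplicates'] = options[:save_URL_duplicates].to_s if options.key?(:save_URL_duplicates)
  params
end

puts url_upload_params(check_URL_duplicates: false)['check_URL_duplicates'] # prints: false
puts url_upload_params({}).empty?                                           # prints: true
```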
In `@lib/uploadcare/internal/error_handler.rb`:
- Around line 87-89: The current retry calculation only handles numeric seconds;
update the logic around retry_after and timeout to support both numeric and
HTTP-date formats: if retry_after matches a numeric value use retry_after.to_f,
otherwise attempt to parse it as an HTTP-date (using Time.httpdate or Time.parse
from 'time') and set timeout = parsed_time - Time.now; if parsing fails or the
computed timeout <= 0, fall back to 10.0. Ensure you require 'time' if needed
and keep the existing header lookup (headers['retry-after'] ||
headers['Retry-After']) and variable names retry_after and timeout so the change
is isolated and testable.
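A sketch of the dual-format Retry-After handling described above; `retry_timeout_from` and its `now:` keyword are illustrative (the keyword only exists to make the sketch deterministic):

```ruby
require 'time'

# Numeric seconds or an HTTP-date, with a 10-second fallback.
def retry_timeout_from(retry_after, now: Time.now)
  return 10.0 if retry_after.nil?
  return retry_after.to_f if retry_after.to_s.match?(/\A\d+(\.\d+)?\z/)

  timeout = Time.httpdate(retry_after) - now
  timeout.positive? ? timeout : 10.0
rescue ArgumentError
  10.0 # unparseable header value
end

puts retry_timeout_from('30') # prints: 30.0
puts retry_timeout_from('Fri, 31 Dec 1999 23:59:30 GMT',
                        now: Time.httpdate('Fri, 31 Dec 1999 23:59:00 GMT')) # prints: 30.0
puts retry_timeout_from('not-a-date') # prints: 10.0
```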
In `@lib/uploadcare/operations/multipart_upload.rb`:
- Around line 59-60: Validate and normalize the multipart upload parameters
before spawning workers: ensure threads is at least 1 (reject or clamp
0/negative values) and ensure part_size is a sensible positive integer (reject
or clamp <= 0 and enforce any minimum chunk size needed by config). Locate the
assignments to part_size and threads (variables part_size, threads) in multipart
upload logic (and the worker dispatch/loop around the code referenced in the
96-116 region) and add input validation that either raises a clear error or
coerces values to safe defaults before any worker creation or part dispatch
occurs.
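The clamping variant of that guard can be sketched like this; `MIN_PART_SIZE` is an assumed floor for illustration, not a value taken from the gem's config:

```ruby
MIN_PART_SIZE = 5 * 1024 * 1024 # 5 MiB, assumed minimum chunk size

# Clamp thread count and part size to safe minimums before any worker spawns.
def normalize_multipart_options(part_size, threads)
  safe_part_size = [part_size.to_i, MIN_PART_SIZE].max
  safe_threads = [threads.to_i, 1].max
  [safe_part_size, safe_threads]
end

puts normalize_multipart_options(0, -2).inspect         # prints: [5242880, 1]
puts normalize_multipart_options(10_485_760, 4).inspect # prints: [10485760, 4]
```

Raising a clear `ArgumentError` instead of clamping is the stricter alternative the prompt also allows; clamping favors forgiving defaults over hard failures.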
---
Duplicate comments:
In `@lib/uploadcare/api/rest/document_conversions.rb`:
- Around line 55-60: The normalize_bool_param method is coercing unknown truthy
inputs to '1' which can invert intent; update normalize_bool_param so it only
converts explicitly recognized truthy (true, 1, '1', 'true') to '1' and
explicitly recognized falsy (false, 0, '0', 'false') to '0', but for any other
input return the original value unchanged (do not default to '1'); ensure the
matching accounts for string case-insensitivity where appropriate and keep the
function name normalize_bool_param as the location to change.
In `@lib/uploadcare/operations/multipart_upload.rb`:
- Around line 46-60: Compute the effective part_size before calling
Uploadcare::Result.unwrap(upload_client.files.multipart_start(...)) so the
server's planned parts match local reads: resolve part_size using
options.fetch(:part_size, config.multipart_chunk_size) and threads similarly
prior to invoking upload_client.files.multipart_start, then pass the resolved
part_size (and threads if needed) into the multipart_start call (or include it
in the options hash) and use those same symbols (part_size, threads,
start_response, presigned_urls, uuid) downstream so local chunking and the API
plan remain consistent.
- Around line 62-66: The per-part upload paths are not receiving the
request_options so presigned PUTs ignore configured timeouts/retries; update the
calls to upload_parts_parallel(prepared_file, presigned_urls, part_size,
threads, &block) and upload_parts_sequential(prepared_file, presigned_urls,
part_size, &block) to include request_options, update those method signatures
(upload_parts_parallel and upload_parts_sequential) to accept request_options,
and ensure the request_options are forwarded into the actual presigned PUT HTTP
calls (the code that performs the PUT for each part) so timeouts/retries are
applied; repeat the same propagation for the other per-part upload call sites
referenced (the additional upload_parts_* usages).
In `@lib/uploadcare/resources/file.rb`:
- Around line 8-13: The YARD examples still reference the old class-level
entrypoints Uploadcare::File.find and Uploadcare::File.upload; update those
examples to use the new client-first surface (e.g., replace
Uploadcare::File.find(uuid: "file-uuid") with client.files.find(uuid:
"file-uuid") and replace Uploadcare::File.upload(File.open("photo.jpg"), store:
true) with client.files.upload(File.open("photo.jpg"), store: true)), ensuring
the examples show obtaining a client instance (client =
Uploadcare::Client.new(...)) if needed.
- Around line 288-303: The cdn_url and uuid logic is returning REST/endpoints
and extracting UUIDs from arbitrary upstream URLs; update File#cdn_url to never
return `@url` (the REST file-info endpoint) and instead build the CDN URL only
from a validated UUID or from `@original_file_url` only when that URL begins with
config.cdn_base, and tighten File#uuid to extract the UUID only from
Uploadcare-owned URL shapes (e.g., require config.cdn_base or known Uploadcare
host pattern before applying the UUID regex to `@url` or `@original_file_url`);
update references to `@url`, `@original_file_url`, uuid, cdn_url and config.cdn_base
accordingly so local-copy payloads cannot synthesize bogus UUIDs and cdn_url
always yields a proper CDN address.
---
Nitpick comments:
In @.rubocop.yml:
- Around line 93-160: Replace the blanket disabling of many RSpec cops in
.rubocop.yml with a generated baseline: run RuboCop's auto-gen to create a
.rubocop_todo.yml (rubocop --auto-gen-config), remove the 23 RSpec cop "Enabled:
false" entries (e.g., RSpec/MultipleExpectations, RSpec/ExampleLength,
RSpec/DescribedClass, RSpec/AnyInstance, etc.) from .rubocop.yml, add an include
for the generated .rubocop_todo.yml so the baseline is applied, and then
re-enable high-signal cops incrementally by removing them from the todo file and
fixing offenses for symbols like RSpec/VerifiedDoubles, RSpec/ReceiveMessages,
RSpec/IteratedExpectation, etc.
In `@api_examples/support/example_helper.rb`:
- Line 12: The example hard-codes an external third-party URL in the
SAMPLE_VIDEO_URL constant which can break tests if that host moves or blocks
content; replace SAMPLE_VIDEO_URL with a repo-controlled or
Uploadcare-controlled stable URL (e.g., an asset committed to the repo or an
official Uploadcare demo URL) and update any other occurrences referenced in the
example block (the same pattern around lines 96-100) to use the new controlled
URL so the upload example is reliable and not dependent on external third-party
hosting.
In `@api_examples/support/run_upload_example.rb`:
- Around line 60-67: Extract the open/ensure/remove pattern into a new
ExampleHelper.with_fixture_file(name) helper that opens
ExampleHelper.fixture_path(name) in binary mode, yields the file handle to a
block, and guarantees closing the handle and calling
ExampleHelper.safe_delete_file(ExampleHelper.uploaded_uuid_from_base_response(response))
when a response exists; then replace run_base_upload to call
ExampleHelper.with_fixture_file('kitten.jpeg') { |handle|
ExampleHelper.unwrap(client.api.upload.files.direct(file: handle, store: true))
} so run_base_upload no longer manages closing or cleanup directly.
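The open/yield/ensure shape of the proposed helper can be sketched minimally. This leaves out the `fixture_path` resolution and the `safe_delete_file` remote cleanup the comment describes, and takes the path directly; the name mirrors the suggestion but the signature is illustrative:

```ruby
# Minimal sketch of the suggested with_fixture_file helper: the block never
# has to manage closing the handle, even when it raises.
def with_fixture_file(path)
  handle = File.open(path, "rb") # binary mode, as the review suggests
  yield handle
ensure
  handle&.close
end
```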
In `@lib/uploadcare/collections/paginated.rb`:
- Around line 102-112: The all method unboundedly iterates pages (using
collection.next_page and concatenating collection.resources) which can exhaust
memory; add an optional limit: parameter (default nil) to all, track a
fetched_count, and stop paging when fetched_count >= limit; when concatenating a
page, only append up to (limit - fetched_count) items (and return early), handle
non-positive limits by returning [] immediately, and update the method signature
and docstring to reflect the new limit behavior.
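A bounded version of `all` could look like the sketch below, assuming a collection object exposing `resources` (an Array) and `next_page` (the next collection or nil). The names mirror the prompt, not necessarily the gem's internals:

```ruby
# Sketch of a limit-aware page walk: stops paging once `limit` items are
# collected and only appends up to (limit - fetched_count) from the last page.
def all_resources(collection, limit: nil)
  return [] if limit && limit <= 0 # non-positive limits return [] immediately
  results = []
  while collection
    page = collection.resources
    if limit
      remaining = limit - results.size
      return results.concat(page.first(remaining)) if page.size >= remaining
    end
    results.concat(page)
    collection = collection.next_page
  end
  results
end
```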
In `@lib/uploadcare/exception/throttle_error.rb`:
- Around line 7-8: The YARD docblock for ThrottleError#initialize is missing
documentation for the first parameter; update the comment above initialize to
include "@param message [String, nil] Message for the error" in addition to the
existing "@param timeout [Float]" so the doc matches the method signature
(initialize(message = nil, timeout: 10.0)).
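The completed docblock could look like this minimal sketch; the class body and the wording of the timeout description are illustrative, only the `@param` lines and signature come from the comment above:

```ruby
class ThrottleError < StandardError
  attr_reader :timeout

  # @param message [String, nil] Message for the error
  # @param timeout [Float] Seconds to wait before retrying (illustrative wording)
  def initialize(message = nil, timeout: 10.0)
    super(message)
    @timeout = timeout
  end
end
```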
In `@README.md`:
- Around line 447-450: The examples are inconsistent: video conversion example
uses job.result while document conversion uses job.fetch("result"), which can
confuse readers; update the README examples (referencing job.result,
job.fetch("result"), and the convert method) to include a brief inline comment
explaining that the video conversion client returns an object with a .result
accessor while the document conversion returns a hash accessed via fetch, or
alternatively change the video convert implementation to return a hash for
parity—prefer the README change: add one-line comments beside the video example
explaining the return shape and how to access the token so readers understand
the difference.
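The return-shape difference described here can be illustrated without the gem; `VideoJob` and the hash below are stand-ins modeling the two shapes, not Uploadcare's actual types:

```ruby
# Stand-in for the video conversion return value: an object with accessors.
VideoJob = Struct.new(:result, keyword_init: true)

video_job = VideoJob.new(result: { "uuid" => "converted-video-uuid" })
video_result = video_job.result # object: use the .result accessor

# Stand-in for the document conversion return value: a plain Hash.
document_job = { "result" => { "uuid" => "converted-doc-uuid" } }
doc_result = document_job.fetch("result") # hash: fetch raises on a missing key
```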
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 46e78fa3-e4cf-4be7-a2d4-75faa29a336b
📒 Files selected for processing (26)
- .rubocop.yml
- .snyk
- Gemfile
- MIGRATING_V5.md
- README.md
- api_examples/support/example_helper.rb
- api_examples/support/run_rest_example.rb
- api_examples/support/run_upload_example.rb
- examples/group_creation.rb
- examples/upload_with_progress.rb
- lib/uploadcare.rb
- lib/uploadcare/api/rest/document_conversions.rb
- lib/uploadcare/api/rest/files.rb
- lib/uploadcare/api/upload/files.rb
- lib/uploadcare/client.rb
- lib/uploadcare/client/groups_accessor.rb
- lib/uploadcare/collections/paginated.rb
- lib/uploadcare/configuration.rb
- lib/uploadcare/exception/throttle_error.rb
- lib/uploadcare/internal/authenticator.rb
- lib/uploadcare/internal/error_handler.rb
- lib/uploadcare/internal/throttle_handler.rb
- lib/uploadcare/operations/multipart_upload.rb
- lib/uploadcare/resources/document_conversion.rb
- lib/uploadcare/resources/file.rb
- lib/uploadcare/resources/group.rb
🚧 Files skipped from review as they are similar to previous changes (5)
- examples/upload_with_progress.rb
- lib/uploadcare/configuration.rb
- lib/uploadcare/internal/throttle_handler.rb
- Gemfile
- examples/group_creation.rb
Actionable comments posted: 5
♻️ Duplicate comments (1)
api_examples/support/example_helper.rb (1)
110-124: ⚠️ Potential issue | 🟡 Minor: Multipart helpers still leak the completed upload.
Line 118 stores the multipart upload permanently, Lines 122-123 only close the local fixture handle, and the uploaded file returned on Line 139 is never deleted. The callers in api_examples/support/run_upload_example.rb (Lines 36-58) all complete the upload, so rerunning those examples will keep adding stored files to the demo account.
Also applies to: 135-140
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_examples/support/example_helper.rb` around lines 110 - 124, The helper with_multipart_session currently starts a stored multipart upload (store: true) and only closes the local file handle, leaking the uploaded file; after yielding, call the API to delete the created upload using the upload id from response (e.g., response['id'] or response.dig('id')) so completed uploads are cleaned up—invoke the corresponding delete method on client.api.upload.files (e.g., client.api.upload.files.delete) in the ensure block after file&.close, but only if a response with an id exists; keep existing unwrap and error handling semantics.
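The ensure-based cleanup the prompt describes can be sketched generically. `start_upload` and `delete_upload` below are injected stand-ins for the real `client.api.upload.files` calls, so the shape is testable without the API:

```ruby
# Sketch: delete the remote upload in an ensure block, but only when a
# response with an id exists, so completed uploads are cleaned up even
# when the block raises.
def with_multipart_session(file, start_upload:, delete_upload:)
  response = start_upload.call(file) # assumed to return a Hash with an "id"
  yield response
ensure
  file&.close
  id = response && response["id"]
  delete_upload.call(id) if id
end
```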
🧹 Nitpick comments (1)
lib/uploadcare/resources/file_metadata.rb (1)
100-133: Consider adding YARD documentation to class methods.
The instance methods have comprehensive YARD documentation, but the class methods only have brief inline comments. For API consistency and better generated docs, consider adding @param, @return, and @see tags to match the instance method documentation style.
📝 Example YARD documentation
```diff
 # --- Class methods ---
-# Get all metadata for a file.
+# Get all metadata for a file.
+#
+# @param uuid [String] File UUID
+# @param client [Uploadcare::Client, nil] Optional client instance
+# @param config [Uploadcare::Configuration] Configuration to use if client is nil
+# @param request_options [Hash] Request options
+# @return [Hash] All metadata key-value pairs
 def self.index(uuid:, client: nil, config: Uploadcare.configuration, request_options: {})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/uploadcare/resources/file_metadata.rb` around lines 100 - 133, Add full YARD comments for the class methods index, show, update, and delete in the FileMetadata class to match the instance-method documentation style: include `@param` for each keyword argument (uuid, key, value when present, client, config, request_options), `@return` describing the Uploadcare::Result unwrap return type, and `@see` linking to the corresponding API endpoint or instance methods; place the docs immediately above each method definition (def self.index, def self.show, def self.update, def self.delete) and mirror wording/format used by the instance methods for consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_examples/support/run_rest_example.rb`:
- Around line 31-34: The copied local file created by client.files.copy_to_local
inside ApiExamples::ExampleHelper.with_uploaded_file is not being removed,
causing accumulated demo files; after calling client.files.copy_to_local (the
call that uses source: file.uuid and options: { store: true }) add a cleanup
step to delete the copied file using the files API (e.g., call the SDK method
that removes a file by its uuid returned from copy_to_local) or ensure
ExampleHelper tears down the copied file UUID, and place this deletion in an
ensure/ensure-like block so the copied file is removed even on errors.
In `@api_examples/support/run_upload_example.rb`:
- Around line 45-53: The example currently uploads part 0 twice: it calls
ApiExamples::ExampleHelper.upload_multipart_part(file: file, session: session,
index: 0) and then calls
ApiExamples::ExampleHelper.finish_multipart_upload(file: file, session:
session), but finish_multipart_upload iterates and PUTs all parts again. Fix by
removing the explicit upload_multipart_part call in the with_multipart_session
branch and let finish_multipart_upload handle uploading all parts (or
alternatively modify finish_multipart_upload to accept a list of
already-uploaded part indices and skip them), referencing the methods
ApiExamples::ExampleHelper.with_multipart_session, upload_multipart_part, and
finish_multipart_upload to locate the code to change.
- Around line 28-31: The post_group.rb case currently uses
ApiExamples::ExampleHelper.with_uploaded_files which only cleans up files and
leaves the created group persistent; change this to use
ApiExamples::ExampleHelper.with_uploaded_group (which deletes groups in
ExampleHelper) or, after creating the group via
ApiExamples::ExampleHelper.unwrap(client.api.upload.groups.create(...)) capture
the returned group's uuid and call the API delete method (e.g.,
client.api.upload.groups.delete or the helper's delete logic) to remove the
group; update the 'post_group.rb' branch in run_upload_example.rb to either call
with_uploaded_group instead of with_uploaded_files or explicitly delete the
created group using the group uuid.
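The duplicate part 0 upload flagged above for run_upload_example.rb could alternatively be fixed by having the finish step accept already-uploaded indices and skip them; a minimal sketch with illustrative names, not the real ExampleHelper API:

```ruby
# Sketch: iterate parts but skip indices the caller uploaded earlier,
# so part 0 is never PUT twice.
def upload_remaining_parts(part_count, already_uploaded: [])
  (0...part_count).each do |index|
    next if already_uploaded.include?(index)
    yield index
  end
end
```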
In `@lib/uploadcare.rb`:
- Around line 19-20: The configure method currently yields to the provided block
and only resets the cached `@client` after successful completion, so if the block
raises the stale client remains; modify configure (the method that calls yield
configuration) to ensure `@client` is invalidated regardless of errors by moving
the `@client` = nil into an ensure block (or otherwise setting `@client` = nil
before yielding) so that the cached client is always reset even when the block
raises.
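The fix can be sketched with a stand-in module; `MiniUploadcare` models only the caching behavior described above, not the real `Uploadcare` module:

```ruby
# Sketch: reset the cached client in an ensure block so a raising
# configure block cannot leave a stale client behind.
module MiniUploadcare
  class << self
    def configuration
      @configuration ||= {}
    end

    def client
      @client ||= Object.new
    end

    def configure
      yield configuration
    ensure
      @client = nil # invalidated even when the block raises
    end
  end
end
```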
In `@lib/uploadcare/resources/file.rb`:
- Around line 307-312: The cdn_url method can build an invalid URL when uuid is
nil; update cdn_url (method) to guard against missing uuid by returning nil (or
an appropriate nil/empty value) when neither `@url` nor `@original_file_url` are
valid CDN URLs and uuid is nil; specifically, after the existing
uploadcare_cdn_url?(`@url`) and uploadcare_cdn_url?(`@original_file_url`) checks,
add a guard like `return nil if uuid.nil?` before constructing
"#{config.cdn_base}#{uuid}/" so you only build the CDN URL when uuid is present.
---
Duplicate comments:
In `@api_examples/support/example_helper.rb`:
- Around line 110-124: The helper with_multipart_session currently starts a
stored multipart upload (store: true) and only closes the local file handle,
leaking the uploaded file; after yielding, call the API to delete the created
upload using the upload id from response (e.g., response['id'] or
response.dig('id')) so completed uploads are cleaned up—invoke the corresponding
delete method on client.api.upload.files (e.g., client.api.upload.files.delete)
in the ensure block after file&.close, but only if a response with an id exists;
keep existing unwrap and error handling semantics.
---
Nitpick comments:
In `@lib/uploadcare/resources/file_metadata.rb`:
- Around line 100-133: Add full YARD comments for the class methods index, show,
update, and delete in the FileMetadata class to match the instance-method
documentation style: include `@param` for each keyword argument (uuid, key, value
when present, client, config, request_options), `@return` describing the
Uploadcare::Result unwrap return type, and `@see` linking to the corresponding API
endpoint or instance methods; place the docs immediately above each method
definition (def self.index, def self.show, def self.update, def self.delete) and
mirror wording/format used by the instance methods for consistency.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 01af37ca-c1db-4413-bb3c-78bab86ec158
📒 Files selected for processing (8)
- api_examples/support/example_helper.rb
- api_examples/support/run_rest_example.rb
- api_examples/support/run_upload_example.rb
- lib/uploadcare.rb
- lib/uploadcare/api/rest/document_conversions.rb
- lib/uploadcare/internal/upload_io.rb
- lib/uploadcare/resources/file.rb
- lib/uploadcare/resources/file_metadata.rb
🚧 Files skipped from review as they are similar to previous changes (2)
- lib/uploadcare/api/rest/document_conversions.rb
- lib/uploadcare/internal/upload_io.rb
Summary
This PR continues the gem-rewrite-3 rewrite and cleans up the public surface, examples, and docs around the current architecture. The main goal is to make the gem read and behave like a clean framework-agnostic Uploadcare client: Uploadcare::Client.
Architecture
Public shape
The public API is now organized around explicit clients and domain accessors:
- client.files
- client.groups
- client.uploads
- client.project
- client.webhooks
- client.file_metadata
- client.addons
- client.conversions
- client.api.rest
- client.api.upload

This keeps the normal usage path clean while preserving full access to the documented Uploadcare APIs.
Layering
The rewrite now has a clearer split between:
Convenience layer
- uploadcare-rails

Raw parity layer
- client.api.rest and client.api.upload
- Uploadcare::Result

Resources and operations
- Operations::UploadRouter and multipart helpers

Configuration model
Configuration is client-scoped instead of being implicitly global:
- Uploadcare.configuration remains the default template
- Uploadcare.client remains available for convenience
- Uploadcare::Client.new(...) owns its own config
- config.with(...) supports safe derived configs

This is important for using multiple Uploadcare accounts in one process, especially in Rails apps and future uploadcare-rails integration work.
What changed
Canonical API examples
The endpoint examples under api_examples/ were normalized so there is one canonical script per documented endpoint.
- api_examples/support/
- api_examples/README.md as the endpoint inventory

Live API verification
The canonical examples were actually run against a real Uploadcare demo account, using credentials from the environment.
Verified areas include:
Workflow examples
The workflow demos under examples/ were updated to use the current public API instead of removed Uploader / old client flows.
README and docs cleanup
- README.md: rebuilt around the current gem shape instead of older rewrite terminology
- MIGRATING_V5.md: now an actual v4-to-v5 map instead of a stale draft
- docs/REWRITE_BLUEPRINT.md and spec/TESTING.md: updated to reflect the current architecture

YARD cleanup
- attr_accessor(*ATTRIBUTES) declarations in resource classes

Verification
Automated
- mise exec -- bundle exec rspec
- mise exec -- bundle exec yard doc

Real examples run against demo account
- api_examples/rest_api/*.rb
- api_examples/upload_api/*.rb
- mise exec -- ruby examples/simple_upload.rb spec/fixtures/kitten.jpeg
- mise exec -- ruby examples/batch_upload.rb spec/fixtures/kitten.jpeg spec/fixtures/another_kitten.jpeg
- mise exec -- ruby examples/group_creation.rb spec/fixtures/kitten.jpeg spec/fixtures/another_kitten.jpeg
- mise exec -- ruby examples/url_upload.rb https://upload.wikimedia.org/wikipedia/commons/3/3f/JPEG_example_flower.jpg
- mise exec -- ruby examples/large_file_upload.rb spec/fixtures/big.jpeg 2
- mise exec -- ruby examples/upload_with_progress.rb spec/fixtures/big.jpeg

Stale API scan
I also scanned the live docs and examples for old API names (Uploadcare::Uploader, old client names, FileList, GroupList, etc.). Outside of the intentional before/after examples in MIGRATING_V5.md and historical entries in CHANGELOG.md, those stale references were cleaned up.
Notes
There are still historical older API names in CHANGELOG.md, which is expected because it describes prior versions. The active README, migration guide, examples, testing guide, and generated YARD docs are aligned with the current rewrite.
Summary by CodeRabbit
New Features
Documentation
Chores