Feat/testing the full image before publishing (issue #82) #12
TatjanaTrajkovic wants to merge 30 commits into main from
Conversation
* chore: Update POM to Java 25 and rename artifactId/groupId
* update folder name from example to juv25d
---------
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
* http parser
* Bunny fixes (only using input stream to receive requests)
* Bunny review improvements
* Improved http parser ReadLine helper method to eliminate dependency on mark() and reset(). Implemented handleClient() using socket as a try-with-resources to avoid memory leakage in case of exception thrown by httpparser methods.
* NumberFormatException fix on line 53 -> 60
* chore: Update POM to Java 25 and rename artifactId/groupId (#11)
* chore: Update POM to Java 25 and rename artifactId/groupId
* update folder name from example to juv25d
---------
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
* resolve conflicts
---------
Co-authored-by: Kristina <kristina0x7@gmail.com>
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
* Add ServerLogging.java as separate class for logging. Implement said class in SocketServer.java to return logging information upon opening socket and user connecting to server.
* Update ServerLogging.java to include a static initializer block and an empty utility class to prevent instantiation.
* Update ServerLogging.java to reference same class in getLogger argument.
* Update ServerLogging.java to check if handler has already been instantiated and allow for log level to be set by args in JVM (default level 'INFO' if no args provided).
* normalize logging statements to be consistent
* remove unused imports
* Update SocketServer.java to properly log server socket errors.
---------
Co-authored-by: Mats Rönnqvist <mats.f.ronnqvist@gmail.com>
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
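The duplicate-handler guard and JVM-arg level selection described above could look roughly like this. A hedged sketch: the class and method names are illustrative stand-ins, not the project's actual ServerLogging code.

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch of the logging setup described in the commit list above.
public final class ServerLoggingSketch {
    private ServerLoggingSketch() {} // utility class: no instances

    public static void configure(Logger logger) {
        // Only attach a ConsoleHandler if one is not already present,
        // so repeated configure() calls do not duplicate log output.
        boolean hasConsoleHandler = false;
        for (var h : logger.getHandlers()) {
            if (h instanceof ConsoleHandler) {
                hasConsoleHandler = true;
                break;
            }
        }
        if (!hasConsoleHandler) {
            logger.addHandler(new ConsoleHandler());
        }
        // Level from -Dlog.level=..., defaulting to INFO when absent or invalid.
        String level = System.getProperty("log.level", "INFO");
        try {
            logger.setLevel(Level.parse(level));
        } catch (IllegalArgumentException e) {
            logger.setLevel(Level.INFO);
        }
    }
}
```

Calling `configure` twice on the same logger leaves exactly one console handler attached, which is the behavior the "check if handler has already been instantiated" commit aims for.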
* update POM with pitest
* add junit plugin dependency
Rebased 4 commits in this PR.
…#27)
* Fix PiTest by defining argLine and removing invalid Mockito javaagent
* self-close argLine
…edicated ConnectionHandler. (#28)
* Rename SocketServer to Server. Move HTTP request handling logic to a dedicated ConnectionHandler.
* Convert createSocket to an instance method named start().
* Replace console prints with Logger in ConnectionHandler
* Fixed typo in ConnectionHandler
* refactor Server and ConnectionHandler:
  - Inject Logger inside of constructor
  - Inject handlerFactory into the Server that handles creation of a new ConnectionHandler on each request
  - Remove HttpParser from Server as it is not handling the parsing of a request
* accept ConnectionHandlerFactory and not a specific implementation
* normalize handlerFactory name
---------
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
…tests. (#34)
* Add testing for ServerLogging.java. Configure ServerLogging.java to improve its testability.
* Update logger_shouldNotAddDuplicateHandlers test to properly test non-inclusion of duplicates.
* Update ServerLogging.java to guard against invalid level string entries in configure method (logger.parse => setLevel).
* http parser
* Bunny fixes (only using input stream to receive requests)
* Bunny review improvements
* Improved http parser ReadLine helper method to eliminate dependency on mark() and reset(). Implemented handleClient() using socket as a try-with-resources to avoid memory leakage in case of exception thrown by httpparser methods.
* NumberFormatException fix on line 53 -> 60
* Added foundation for Filters and Plugins. Added FilterChain to use created filters, and a Pipeline class to handle the workflow (Client → Filters → Plugin → Response → Client). Modified SocketServer handleClient() to use FilterChain. Added example code in App.java for Pipeline usage. TODO: Initialize HttpResponse class
* Fixing build fail
* add init and destroy to filter interface
* add methods to pipeline class
* Add servlet-style filter pipeline with lifecycle support
* add documentation about temporary server impl
* test: verify filters execute in order and plugin is called last
* test: add coverage for filter blocking, lifecycle init, and empty pipeline
* add LoggingFilter test
* fix: construct HttpResponse with required arguments
* fix: construct HttpResponse with required arguments
* Update FilterChainImpl tests to use full HttpResponse constructor to ensure compatibility with future implementations
* remove duplicated class
---------
Co-authored-by: Kristina M <kristina0x7@gmail.com>
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
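The workflow these commits describe (Client → Filters → Plugin → Response → Client) can be sketched as a minimal chain. All names here are illustrative stand-ins for the project's Filter/FilterChain/Pipeline/HttpRequest types, and a `boolean` return stands in for whatever blocking mechanism the real code uses:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a servlet-style filter chain with a terminal plugin.
public class FilterChainSketch {
    interface Filter {
        // Returning false blocks the chain, mirroring the "filter blocking" tests above.
        boolean doFilter(StringBuilder trace);
    }

    interface Plugin {
        void handle(StringBuilder trace);
    }

    private final List<Filter> filters = new ArrayList<>();
    private Plugin plugin;

    public FilterChainSketch addFilter(Filter f) { filters.add(f); return this; }
    public FilterChainSketch setPlugin(Plugin p) { plugin = p; return this; }

    // Run filters in registration order; the plugin runs last, only if nothing blocked.
    public String run() {
        StringBuilder trace = new StringBuilder();
        for (Filter f : filters) {
            if (!f.doFilter(trace)) {
                return trace.toString(); // blocked: short-circuit before the plugin
            }
        }
        if (plugin != null) plugin.handle(trace);
        return trace.toString();
    }
}
```

The trace-based design makes it easy to test the two properties the commit list mentions: filters execute in order, and the plugin is called last.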
…39)
* feat: make HttpResponse mutable and implement NotFoundPlugin default
  - Updated HttpResponse to be mutable to allow filters and plugins to modify responses.
  - Implemented NotFoundPlugin as a default fallback for the Pipeline.
  - Added null safety check in Pipeline.setPlugin.
  - Added unit tests for Pipeline default behavior and NotFoundPlugin.
* ran mvn spotless:apply to fix violations
* Rabbit comment fixes
* Review fixes
* Final bug fixes for this PR (again)
* Implement static file handler with security and tests
  Core Implementation:
  - Add StaticFileHandler for serving files from /resources/static/
  - Add MimeTypeResolver for Content-Type detection
  - Add security validation to prevent path traversal attacks
  Testing:
  - Add MimeTypeResolverTest (15 test cases)
  - Add StaticFileHandlerTest (20+ test cases)
  - All tests passing
  Example Files:
  - Add index.html demo page with gradient styling
  - Add styles.css for professional styling
  - Add app.js for JavaScript functionality demo
  Note: Integration with Server/ConnectionHandler will be added after PR #28 merges to avoid conflicts. Foundation work for #18
* Introduce ADR structure and first ADR
  - Add ADR README explaining the ADR process for the team
  - Add TEMPLATE for writing future ADRs
  - Add ADR-001 documenting static file serving architecture
  Closes #16
* Rename SocketServer to Server. Move HTTP request handling logic to a dedicated ConnectionHandler. (#28)
* Rename SocketServer to Server. Move HTTP request handling logic to a dedicated ConnectionHandler.
* Convert createSocket to an instance method named start().
* Replace console prints with Logger in ConnectionHandler
* Fixed typo in ConnectionHandler
* refactor Server and ConnectionHandler:
  - Inject Logger inside of constructor
  - Inject handlerFactory into the Server that handles creation of a new ConnectionHandler on each request
  - Remove HttpParser from Server as it is not handling the parsing of a request
* accept ConnectionHandlerFactory and not a specific implementation
* normalize handlerFactory name
---------
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
* Add testing for ServerLogging.java. Configure ServerLogging.java for tests. (#34)
* Add testing for ServerLogging.java. Configure ServerLogging.java to improve its testability.
* Update logger_shouldNotAddDuplicateHandlers test to properly test non-inclusion of duplicates.
* Update ServerLogging.java to guard against invalid level string entries in configure method (logger.parse => setLevel).
* feature/FilterPlugin (#17)
* http parser
* Bunny fixes (only using input stream to receive requests)
* Bunny review improvements
* Improved http parser ReadLine helper method to eliminate dependency on mark() and reset(). Implemented handleClient() using socket as a try-with-resources to avoid memory leakage in case of exception thrown by httpparser methods.
* NumberFormatException fix on line 53 -> 60
* Added foundation for Filters and Plugins. Added FilterChain to use created filters, and a Pipeline class to handle the workflow (Client → Filters → Plugin → Response → Client). Modified SocketServer handleClient() to use FilterChain. Added example code in App.java for Pipeline usage. TODO: Initialize HttpResponse class
* Fixing build fail
* add init and destroy to filter interface
* add methods to pipeline class
* Add servlet-style filter pipeline with lifecycle support
* add documentation about temporary server impl
* test: verify filters execute in order and plugin is called last
* test: add coverage for filter blocking, lifecycle init, and empty pipeline
* add LoggingFilter test
* fix: construct HttpResponse with required arguments
* fix: construct HttpResponse with required arguments
* Update FilterChainImpl tests to use full HttpResponse constructor to ensure compatibility with future implementations
* remove duplicated class
---------
Co-authored-by: Kristina M <kristina0x7@gmail.com>
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
* Add charset=utf-8 to text-based Content-Type headers
* Integrate StaticFileHandler with Pipeline architecture
* Refactor to use immutable fields and clean up unused setters; update with usage notes for future integration.
* Enhance StaticFilesPlugin to properly handle headers with map iteration; simplify and clarify class documentation.
* Update tests to verify Content-Type headers include charset=utf-8
* connect StaticFilesPlugin
---------
Co-authored-by: johanbriger <johanbriger@gmail.com>
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
Co-authored-by: Mats Rönnqvist <203552386+bamsemats@users.noreply.github.com>
Co-authored-by: Linus Westling <141355850+LinusWestling@users.noreply.github.com>
Co-authored-by: Kristina M <kristina0x7@gmail.com>
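The path-traversal validation mentioned above can be sketched like this. This is an assumption-laden illustration, not the actual StaticFileHandler code: the idea is to resolve the requested path against the static root, normalize it, and reject anything that escapes the root.

```java
import java.nio.file.Path;

// Illustrative path-traversal guard for a static file handler.
public final class PathGuardSketch {
    private PathGuardSketch() {}

    public static boolean isSafe(Path staticRoot, String requestPath) {
        // Strip the leading slash so resolve() treats the path as relative.
        String relative = requestPath.startsWith("/") ? requestPath.substring(1) : requestPath;
        Path resolved = staticRoot.resolve(relative).normalize();
        // normalize() collapses "../" segments; the result must stay under the root.
        return resolved.startsWith(staticRoot.normalize());
    }
}
```

A request like `/css/../../secret.txt` normalizes to a path outside the root and is rejected before any file I/O happens.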
* bump pom version for release
* revert pom version
* fix: docker release workflow to run on tag push
* fix: update tag ref in docker release workflow
* fix: update temurin version in Dockerfile to match pom version 25
* fix: configure maven-jar-plugin to find the main class for manifest
* Add test for valid GET request and fix key case-sensitivity in header parsing
* Verify empty request throws exception
* Verify malformed request throws exception
* Verify valid query string
* Verify null and empty requests throw exception
* Verify malformed header line in GET request throws exception
* Verify valid POST request
* Verify invalid or negative content length throws exception
* Add test documentation and improve test readability for HttpParser
* Update tests and ensure UTF-8 handling; use case-insensitive headers in HttpParser
* Refactor: Introduce constant for "Content-Length" header in HttpParser
---------
Co-authored-by: Mattias Hagström <mattiashagstrommusic@gmail.com>
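The case-insensitive header handling these commits mention is commonly done with a `TreeMap` ordered by `String.CASE_INSENSITIVE_ORDER`, so `content-length` and `Content-Length` resolve to the same entry. A small sketch; the helper class and method names are hypothetical:

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative case-insensitive header map, as commonly used in HTTP parsers.
public final class HeaderMapSketch {
    private HeaderMapSketch() {}

    public static Map<String, String> newHeaderMap() {
        // TreeMap with CASE_INSENSITIVE_ORDER treats keys differing only
        // in case as the same key, on both put and get.
        return new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
    }
}
```

With this structure the parser no longer needs to normalize header names at every lookup site.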
…ons (#52)
* add application properties and a ConfigLoader to load set configurations
* integrate ConfigLoader into ServerLogging to read logging.level
* inject Server port from ConfigLoader into constructor
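A ConfigLoader along these lines might look as follows. The property keys (`server.port`, `logging.level`) and defaults are assumptions for illustration, and the real loader reads application properties from the classpath rather than a string:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Illustrative configuration loader with typed accessors and defaults.
public class ConfigLoaderSketch {
    private final Properties props = new Properties();

    public ConfigLoaderSketch(String propertiesText) {
        try {
            props.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new IllegalStateException("Unreadable configuration", e);
        }
    }

    // Port used by the Server constructor; falls back to 8080 when unset.
    public int getPort() {
        return Integer.parseInt(props.getProperty("server.port", "8080"));
    }

    // Level consumed by ServerLogging; falls back to INFO when unset.
    public String getLogLevel() {
        return props.getProperty("logging.level", "INFO");
    }
}
```

Injecting the loaded values through constructors (as the commits above do for the Server port) keeps the classes testable without touching global state.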
* test: add global filter execution test
* feat: implement global filter support with ordering
* test: add route specific filter test
* feat: add route specific filter matching logic
* update main comments with upcoming filter impl
* chore: run spotless:apply to fix unused imports and formatting
* bunny-fix
* refactor: optimize route filter lookup with Map structure
* small fix
* fix: make global filter cache thread-safe and immutable
* implement the correct plugins so server works as intended and move comments to a doc
---------
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
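The ordered global-filter registration could be sketched as below. The class, the `addGlobalFilter(name, order)` signature, and the ascending-order rule are assumptions, not the project's exact API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of global filters registered with an explicit order value.
public class OrderedFiltersSketch {
    record Entry(String name, int order) {}

    private final List<Entry> filters = new ArrayList<>();

    public void addGlobalFilter(String name, int order) {
        filters.add(new Entry(name, order));
    }

    // Lower order runs first; registration order breaks ties (sorted() is stable).
    public List<String> executionOrder() {
        return filters.stream()
                .sorted(Comparator.comparingInt(Entry::order))
                .map(Entry::name)
                .toList();
    }
}
```

Sorting once at lookup time (or caching the sorted list immutably, as the thread-safety commit suggests) keeps registration order-independent.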
* Add IP filter skeleton with placeholder IP
* Add IP filter and support client IP in HttpRequest
* Add remote IP to HttpRequest in ConnectionHandler
* Configured IP filter whitelist for localhost
* Enable IP filter with open access for development
* Update failing tests for HttpRequest remote IP
* Document IP filter configuration in App
* Resolved merge conflict in App pipeline configuration
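The whitelist/blacklist IP filter might work roughly like this. The precedence rules here (blacklist wins, empty whitelist means open access) are assumptions chosen to match the "open access for development" commit above, not the project's confirmed behavior:

```java
import java.util.Set;

// Illustrative IP filter with whitelist/blacklist semantics.
public class IpFilterSketch {
    private final Set<String> whitelist;
    private final Set<String> blacklist;

    public IpFilterSketch(Set<String> whitelist, Set<String> blacklist) {
        this.whitelist = whitelist;
        this.blacklist = blacklist;
    }

    public boolean allow(String remoteIp) {
        // Blacklist takes precedence over everything else.
        if (blacklist.contains(remoteIp)) return false;
        // An empty whitelist means open access (the development mode above).
        return whitelist.isEmpty() || whitelist.contains(remoteIp);
    }
}
```

The remote IP would come from the `HttpRequest` field the ConnectionHandler populates, per the commits above.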
…eadme.md (#63)
* Update index.html to include a nav bar at the top, styles.css for adjustments to frontend aspects, add readme.html to host readme.md, basic javascript for loading markdown language content.
* Update frontend logic to avoid errors loading data from readme.md. Smoothed out transitions between sites from navigation.
* Update minor bugs and duplicate css code lines.
* Update minor bugs and duplicate css code lines.
* Update for sanitization of readme.md through DOMPurify. Added dependency min.js and its callback in index.html and readme.html. Removed superfluous markdown.js.
* Add Maven Shade Plugin for building a fat JAR; update README with detailed project overview and usage instructions
* Update project version to 1.0.2-beta in pom.xml
* Update README to reflect fat JAR packaging using Maven Shade Plugin
* fix: specify final jar name so the Docker build picks the correct one; roll back pom version
* rollback POM version to original
* update README with dynamic tag and correct port number
* update README with the dynamic specified jar file generated from maven-shade
---------
Co-authored-by: WHITEROSE <firasmoussa60@gmail.com>
* Refactor plugin handling by introducing `Router` abstraction; added `SimpleRouter` implementation. Updated pipeline and tests to support new routing system.
* Refactor plugin handling by introducing `Router` abstraction; added `SimpleRouter` implementation. Updated pipeline and tests to support new routing system.
* Improve wildcard routing in `SimpleRouter` with longest-prefix match logic; add test coverage for specific and wildcard match scenarios.
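Longest-prefix wildcard matching as described in the last commit can be sketched like this. The names are illustrative, and handlers are plain strings here where the real SimpleRouter resolves to Plugin instances:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative router: exact routes win, then the longest matching wildcard prefix.
public class SimpleRouterSketch {
    private final Map<String, String> exact = new HashMap<>();
    private final Map<String, String> wildcard = new HashMap<>(); // prefix -> handler

    public void register(String pattern, String handler) {
        if (pattern.endsWith("/*")) {
            // "/api/*" is stored as prefix "/api/" (trailing slash kept).
            wildcard.put(pattern.substring(0, pattern.length() - 1), handler);
        } else {
            exact.put(pattern, handler);
        }
    }

    public String resolve(String path) {
        String hit = exact.get(path);
        if (hit != null) return hit;
        String bestPrefix = null;
        for (String prefix : wildcard.keySet()) {
            if (path.startsWith(prefix)
                    && (bestPrefix == null || prefix.length() > bestPrefix.length())) {
                bestPrefix = prefix; // keep the most specific (longest) prefix
            }
        }
        return bestPrefix != null ? wildcard.get(bestPrefix) : null;
    }
}
```

With `/*` and `/api/*` both registered, `/api/other` resolves to the `/api/*` handler because its prefix is longer.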
📝 Walkthrough
This PR introduces a complete Java HTTP server implementation with request/response pipeline architecture, filter-based request processing, routing to plugins, static file serving with security validation, YAML configuration, CI/CD workflows, comprehensive testing, and detailed documentation including ADRs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant Handler as ConnectionHandler
    participant Parser as HttpParser
    participant Pipeline
    participant Filter as Filter Chain
    participant Router
    participant Plugin
    participant ResponseWriter
    Client->>Server: TCP Connection
    Server->>Handler: create(socket, pipeline)
    Handler->>Parser: parse(inputStream)
    Parser-->>Handler: HttpRequest
    Handler->>Pipeline: createChain(request)
    Pipeline->>Filter: doFilter(request, response)
    loop For each global + route filter
        Filter->>Filter: execute filter logic
        alt Filter blocks
            Filter-->>Filter: return early
        else Filter passes
            Filter->>Filter: next filter
        end
    end
    Filter->>Router: resolve(request)
    Router-->>Filter: Plugin
    Filter->>Plugin: handle(request, response)
    Plugin-->>Filter: response modified
    Filter-->>Handler: chain complete
    Handler->>ResponseWriter: write(outputStream, response)
    ResponseWriter-->>Client: HTTP/1.1 response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (16)
src/main/java/org/juv25d/http/HttpResponseWriter.java-27-43 (1)
27-43: ⚠️ Potential issue | 🟡 Minor
`NullPointerException` if `response.headers()` returns `null`.
`HttpResponse` has a no-arg constructor that leaves its `headers` field uninitialised. If a plugin constructs `new HttpResponse()` without setting headers, `response.headers()` returns `null` and `headers.entrySet()` in `writeHeaders` will throw an NPE at runtime. Guard against this:
🛡️ Proposed fix
```diff
 private static void writeHeaders(
     OutputStream out, Map<String, String> headers, byte[] body
 ) throws IOException {
+    if (headers != null) {
     for (Map.Entry<String, String> header : headers.entrySet()) {
         if (!header.getKey().equalsIgnoreCase("Content-Length")) {
             String line = header.getKey() + ": " + header.getValue() + "\r\n";
             out.write(line.getBytes(StandardCharsets.UTF_8));
         }
     }
+    }
```
Alternatively, fix the root cause in the `HttpResponse()` no-arg constructor by initialising `headers = new LinkedHashMap<>()`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/org/juv25d/http/HttpResponseWriter.java` around lines 27 - 43, The NPE occurs because HttpResponse's no-arg constructor can leave the headers field null causing writeHeaders(OutputStream, Map<String,String>, byte[]) to call headers.entrySet(); fix it by initializing headers = new LinkedHashMap<>() in HttpResponse's no-arg constructor so response.headers() never returns null, and also make writeHeaders defensive by treating a null headers parameter as empty (e.g., skip iterating if headers == null) to avoid runtime errors if any other code passes null.
src/main/resources/static/readme.html-18-18 (1)
18-18: ⚠️ Potential issue | 🟡 Minor
Missing semicolons on HTML numeric character references.
`&#128269` and `&#128270` should be `&#128269;` and `&#128270;` respectively. While most browsers recover gracefully, omitting the trailing semicolon is technically invalid HTML and can cause parsing ambiguity.
🐛 Proposed fix
```diff
-    <h1>&#128269 ReadMe &#128270</h1>
+    <h1>&#128269; ReadMe &#128270;</h1>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/resources/static/readme.html` at line 18, The numeric character references in the heading are missing their trailing semicolons (&#128269 and &#128270); update them to include semicolons (&#128269; and &#128270;) in the <h1> element so the entities are valid HTML and unambiguous when parsed.
src/main/resources/static/readme.html-36-38 (1)
36-38: ⚠️ Potential issue | 🟡 Minor
Inconsistent script `src` path — `purify.min.js` uses a relative URL.
Lines 36 and 38 use absolute paths (`/js/marked.min.js`, `/js/app.js`), but line 37 uses a relative path (`js/purify.min.js`). This will break if the page is ever served at a nested URL. Use a consistent absolute path:
🐛 Proposed fix
```diff
 <script src="/js/marked.min.js"></script>
-<script src="js/purify.min.js"></script>
+<script src="/js/purify.min.js"></script>
 <script src="/js/app.js"></script>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/resources/static/readme.html` around lines 36 - 38, The script tag for purify uses a relative path which will break on nested routes; update the src attribute for purify.min.js in the readme.html <script> element to use the absolute path "/js/purify.min.js" so it matches the other includes (marked.min.js and app.js) and keeps path resolution consistent across routes.
src/main/java/org/juv25d/http/HttpParser.java-50-62 (1)
50-62: ⚠️ Potential issue | 🟡 Minor
`readNBytes` may return fewer bytes than `length` on a premature stream close.
`InputStream.readNBytes(int len)` can return a shorter array if the stream reaches EOF before `len` bytes are read. The code doesn't verify that `body.length == length`, so a truncated body would be silently accepted. Consider adding a check:
🛡️ Proposed fix
```diff
 body = in.readNBytes(length);
+if (body.length < length) {
+    throw new IOException(
+        "Unexpected end of stream: expected " + length + " bytes, got " + body.length);
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/org/juv25d/http/HttpParser.java` around lines 50 - 62, The current HttpParser code uses in.readNBytes(length) to populate body from CONTENT_LENGTH but does not verify the returned byte array size, so a premature stream close can produce a truncated body; fix this in the block that reads into body by reading exactly length bytes (either by looping on InputStream.read until total bytesRead == length or by using DataInputStream.readFully) and if fewer bytes are returned throw an IOException indicating premature EOF/short read; ensure the exception message references the expected length and actual bytes read and keep the same CONTENT_LENGTH/header handling and variable names (body, length, in) so the change is localized.
src/main/resources/static/index.html-46-48 (1)
46-48: ⚠️ Potential issue | 🟡 Minor
Same inconsistent relative path for `purify.min.js`.
Same issue as in `readme.html` — line 47 uses `js/purify.min.js` (relative) while the other scripts use absolute paths. Fix for consistency:
🐛 Proposed fix
```diff
 <script src="/js/marked.min.js"></script>
-<script src="js/purify.min.js"></script>
+<script src="/js/purify.min.js"></script>
 <script src="/js/app.js"></script>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/resources/static/index.html` around lines 46 - 48, Replace the inconsistent relative script path "js/purify.min.js" with the absolute path "/js/purify.min.js" so all <script> tags match (e.g., alongside "/js/marked.min.js" and "/js/app.js"); update the src attribute in the script element that currently reads "js/purify.min.js" to "/js/purify.min.js" to ensure consistent asset resolution.
src/main/resources/static/js/app.js-21-30 (1)
21-30: ⚠️ Potential issue | 🟡 Minor
Root path `"/"` is not mapped in the routes table.
If users visit the root URL (`/`), `route()` won't find a handler. If the server serves `index.html` for `/`, this is likely fine (the noop handler would be appropriate). But if `/readme.html` content is ever served at a different path, `initReadme` won't fire. Consider adding `"/"` as an alias:
```diff
 const routes = {
+  "/": () => {},
   "/index.html": () => {},
   "/readme.html": initReadme,
 };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/resources/static/js/app.js` around lines 21 - 30, The routes table is missing a mapping for the root path "/", so calling route("/") won't invoke any handler; update the routes object (symbol: routes) to include "/" as an alias for "/index.html" (i.e., map "/" to the same noop handler or to initReadme where appropriate) so route(path) will find a handler for root; modify routes so that "/" either points to the existing empty function for "/index.html" or to initReadme if you intend root to initialize the readme, and ensure route(path) logic remains unchanged.
src/main/java/org/juv25d/handler/StaticFileHandler.java-50-50 (1)
50-50: ⚠️ Potential issue | 🟡 Minor
User-controlled `path`/`resourcePath` concatenated into log messages — log injection risk.
An attacker can embed newline characters in the path to forge log entries. Consider using parameterized logging or sanitizing the path before logging.
Also applies to: 57-57, 74-74, 78-78
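One way to neutralise this, sketched under the assumption that replacing control characters is acceptable for these log messages (the class and method names are illustrative, not the project's code):

```java
// Illustrative sanitizer: strip control characters (including CR/LF) from
// user-supplied values before they reach the logger.
public final class LogSanitizerSketch {
    private LogSanitizerSketch() {}

    public static String sanitize(String userInput) {
        if (userInput == null) return "null";
        // Replace CR, LF and other control characters so a request path
        // cannot forge extra log lines.
        return userInput.replaceAll("[\\r\\n\\p{Cntrl}]", "_");
    }
}
```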
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/org/juv25d/handler/StaticFileHandler.java` at line 50, The log message in StaticFileHandler (calls to logger.warning) currently concatenates user-controlled variables like path and resourcePath which permits log injection; update the logging to either use parameterized logging (pass the path as a parameter instead of string concatenation) or sanitize/escape the values (strip/encode newline, carriage return, and control characters) before logging in the methods that call logger.warning (references: StaticFileHandler, variables path and resourcePath, and the logger.warning calls). Ensure every logger.warning occurrence that logs user input is changed accordingly so untrusted input cannot inject new log lines.
docs/adr/ADR-001-static-file-serving-architecture.md-238-238 (1)
238-238: ⚠️ Potential issue | 🟡 Minor
Replace placeholder issue URL.
The reference at line 238 points to `https://github.com/your-repo/issues/18`, which is a template placeholder. Update it to the actual repository URL.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/adr/ADR-001-static-file-serving-architecture.md` at line 238, The markdown link pointing to the placeholder issue URL "[Issue `#18`: GET handling for static files](https://github.com/your-repo/issues/18)" should be updated to the real repository issue URL; locate the string in ADR-001 static file serving doc (the markdown link text "Issue `#18`: GET handling for static files" and its URL) and replace "https://github.com/your-repo/issues/18" with the actual repository issue URL (e.g., "https://github.com/<org-or-user>/<repo>/issues/18") so the link resolves to the correct issue.
docs/adr/ADR-001-static-file-serving-architecture.md-46-46 (1)
46-46: ⚠️ Potential issue | 🟡 Minor
Add language identifiers to fenced code blocks (MD040).
Three code fences at lines 46, 65, and 224 are missing language specifiers, causing markdownlint MD040 warnings. Use `text` for plain-text/pseudo-code blocks and `java` for the Java snippet at line 214.
🔧 Proposed fix (representative example for each block)
````diff
-```
+```text
 src/main/resources/

-```
+```text
 StaticFileHandler

-```
+```text
 src/main/resources/static/
````
Also applies to: 65-65, 224-224
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/adr/ADR-001-static-file-serving-architecture.md` at line 46, Update the three fenced code blocks that lack language specifiers by adding appropriate languages: mark the plain-text/pseudo-code fences containing "src/main/resources/", "StaticFileHandler", and "src/main/resources/static/" with ```text, and mark the Java snippet (the block referencing the StaticFileHandler implementation) with ```java so markdownlint MD040 warnings are resolved; locate the fences by searching for those exact snippets and add the language identifiers immediately after the opening backticks.
src/main/java/org/juv25d/plugin/NotFoundPlugin.java-13-13 (1)
13-13: ⚠️ Potential issue | 🟡 Minor
Use explicit charset in `getBytes()`.
`getBytes()` without a charset argument uses the platform default encoding. Specify `StandardCharsets.UTF_8` to guarantee consistent behavior across all environments.
🔧 Proposed fix
```diff
+import java.nio.charset.StandardCharsets;
 ...
-res.setBody("404 - Resource Not Found".getBytes());
+res.setBody("404 - Resource Not Found".getBytes(StandardCharsets.UTF_8));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/org/juv25d/plugin/NotFoundPlugin.java` at line 13, The call in NotFoundPlugin that sets the response body using "404 - Resource Not Found".getBytes() should specify a charset to avoid relying on the platform default; update the code in the NotFoundPlugin (where res.setBody(...) is called) to use StandardCharsets.UTF_8 (and add an import for java.nio.charset.StandardCharsets if missing) so the byte encoding is explicit and consistent across environments.
src/test/java/org/juv25d/logging/ServerLoggingTest.java-123-141 (1)
123-141: ⚠️ Potential issue | 🟡 Minor
Test ignores the `LOG_LEVEL` environment variable in the fallback chain.
`ServerLogging.configure` resolves the level as `System.getProperty("log.level", System.getenv().getOrDefault("LOG_LEVEL", configLoader.getLogLevel()))`. After `System.clearProperty("log.level")`, if `LOG_LEVEL` is set in the environment (common in CI), the level will come from the env var rather than `configLoader.getLogLevel()`, causing the assertion to fail spuriously.
Add a guard to skip (or adjust the expected value) when `LOG_LEVEL` is set:
```java
org.junit.jupiter.api.Assumptions.assumeTrue(
    System.getenv("LOG_LEVEL") == null,
    "Skipping: LOG_LEVEL env var overrides application-properties fallback"
);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/test/java/org/juv25d/logging/ServerLoggingTest.java` around lines 123 - 141, The test fails when the LOG_LEVEL env var is present because ServerLogging.configure uses System.getenv("LOG_LEVEL") in its fallback; update the test logger_shouldUseLogLevelFromApplicationProperties to guard for that by using a JUnit assumption that System.getenv("LOG_LEVEL") is null before asserting the expected level from ConfigLoader.getInstance().getLogLevel(); specifically add an assumption at the start of the test to skip when LOG_LEVEL is set so ServerLogging.configure resolution (System.getProperty/.../System.getenv(...)) won't be influenced by the environment.
src/test/java/org/juv25d/logging/ServerLoggingTest.java-84-90 (1)
84-90: ⚠️ Potential issue | 🟡 Minor
`logger_shouldHaveInfoLevelByDefault` depends on static initialization state, not a verified default.
`setUp()` only removes handlers — it does not reset the log level. The assertion `assertEquals(Level.INFO, logger.getLevel())` passes only if `application-properties.yml` already specifies `INFO`. Any other configured default (e.g., `FINE`) causes a false failure. Either:
- Read the expected level from `ConfigLoader.getInstance().getLogLevel()` and assert that, or
- Explicitly call `ServerLogging.configure(logger)` after clearing the property and assert the resulting level matches the config.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/test/java/org/juv25d/logging/ServerLoggingTest.java` around lines 84 - 90, The test logger_shouldHaveInfoLevelByDefault relies on existing static state; fix it by making the expected level explicit: either fetch the expected level from ConfigLoader.getInstance().getLogLevel() and assert logger.getLevel() equals that, or reset and reconfigure the logger by calling ServerLogging.configure(logger) after your setUp() cleanup and then assert logger.getLevel() equals the config-derived level; update the test to use ServerLogging.getLogger(), ConfigLoader.getInstance().getLogLevel(), and/or ServerLogging.configure(...) so the assertion no longer depends on external static initialization.
docs/notes/pipeline-usage.md-11-11 (1)
11-11: ⚠️ Potential issue | 🟡 Minor
`new IpFilter()` in the example doesn't match the actual constructor.
`IpFilter` has no no-arg constructor; its only constructor requires `(Set<String> whitelist, Set<String> blacklist)`. Anyone following this example will get a compile error.
📝 Suggested fix
```diff
-    // pipeline.addGlobalFilter(new IpFilter(), 300);
+    // pipeline.addGlobalFilter(new IpFilter(Set.of(), Set.of()), 300);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/notes/pipeline-usage.md` at line 11, The example uses new IpFilter() which doesn't exist; update the pipeline example to construct IpFilter with the required parameters by creating and passing the whitelist and blacklist sets (e.g., Set<String> whitelist, Set<String> blacklist) into the IpFilter constructor before calling pipeline.addGlobalFilter; reference the IpFilter constructor and the call site pipeline.addGlobalFilter(...) so readers see the correct instantiation and usage.
src/test/java/org/juv25d/http/HttpParserTest.java-153-153 (1)
153-153: ⚠️ Potential issue | 🟡 Minor
`body.getBytes()` uses the platform default charset, inconsistent with how the request was built.
Line 140 computes `Content-Length` using `body.getBytes(StandardCharsets.UTF_8)`, but line 153 asserts the parsed body using `body.getBytes()` (platform default). On non-UTF-8 platforms this comparison can fail or silently pass on wrong bytes.
🛠 Proposed fix
```diff
-assertThat(result.body()).isEqualTo(body.getBytes());
+assertThat(result.body()).isEqualTo(body.getBytes(StandardCharsets.UTF_8));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/test/java/org/juv25d/http/HttpParserTest.java` at line 153, The assertion uses platform-default bytes which can differ from how the request was constructed; update the assertion in HttpParserTest so it compares result.body() to body.getBytes(StandardCharsets.UTF_8) (matching the Content-Length computation) to ensure consistent UTF-8 encoding; locate the test method that calls result.body() and replace the platform-default getBytes() call with the explicit StandardCharsets.UTF_8 version.
src/test/java/org/juv25d/http/HttpParserTest.java-117-127 (1)
117-127: ⚠️ Potential issue | 🟡 Minor
parseValidQueryString relies on EOF as the end-of-headers sentinel. The request "GET /search?q=java HTTP/1.1\r\n" has no trailing blank line (\r\n) to explicitly signal the end of headers. The test passes only because ByteArrayInputStream returns EOF and the parser treats it as end-of-headers. If the parser is later hardened to require an explicit blank line (which is spec-compliant), this test will silently break.
🛠 Proposed fix
- String request = "GET /search?q=java HTTP/1.1\r\n";
+ String request = "GET /search?q=java HTTP/1.1\r\n"
+         + "\r\n";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/test/java/org/juv25d/http/HttpParserTest.java` around lines 117 - 127, the test parseValidQueryString uses an HTTP request string without the terminating blank line, relying on EOF to signal end-of-headers; update the test so the request passed into HttpParser.parse (via createInputStream) includes the required CRLF CRLF after the request line (i.e., append "\r\n" to create the empty header terminator) so it explicitly matches the HTTP end-of-headers sentinel and won't break if the parser is changed to require an explicit blank line.
README.md-310-314 (1)
310-314: ⚠️ Potential issue | 🟡 Minor
Stale setPlugin() API reference — Pipeline now uses setRouter(). The "Registering a Plugin" example documents pipeline.setPlugin(new HelloPlugin()), but this method no longer exists in Pipeline. The current API is setRouter(Router). Readers following this example will encounter a compilation error.
🛠 Proposed fix
- pipeline.setPlugin(new HelloPlugin());
+ pipeline.setRouter(new SimpleRouter()); // register plugin routes on the Router
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 310 - 314, The README example uses the removed Pipeline.setPlugin(...) API; update the "Registering a Plugin" section to use the current API by replacing Pipeline.setPlugin(new HelloPlugin()) with Pipeline.setRouter(...) and pass a Router instance that wraps or registers HelloPlugin (e.g., create or obtain a Router that delegates to HelloPlugin), ensuring references to Pipeline, setPlugin, setRouter, HelloPlugin and Router are corrected so the example compiles against the current API.
      - name: Run tests
        run: ./mvnw -B test
Integration tests are never executed in CI — add a verify step.
The pipeline runs ./mvnw -B test, which only invokes the Surefire unit-test phase. AppIT (the PR's core deliverable) is bound to the Failsafe integration-test phase and only runs during ./mvnw verify. As-is, the entire integration test suite is silently skipped in CI.
🔧 Proposed fix
- - name: Run tests
- run: ./mvnw -B test
+ - name: Run unit tests
+ run: ./mvnw -B test
+
+ - name: Run integration tests
+ run: ./mvnw -B verify -DskipTests -Dfailsafe.useFile=false
Or, collapse both into a single verify invocation (Surefire runs automatically as part of verify):
- - name: Run tests
- run: ./mvnw -B test
+ - name: Build and verify
+ run: ./mvnw -B verify -Dfailsafe.useFile=false
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-      - name: Run tests
-        run: ./mvnw -B test
+      - name: Run unit tests
+        run: ./mvnw -B test
+      - name: Run integration tests
+        run: ./mvnw -B verify -DskipTests -Dfailsafe.useFile=false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 23 - 24, The CI job step named "Run
tests" currently executes "./mvnw -B test" which only runs Surefire/unit tests
and skips Failsafe-bound integration tests (e.g., AppIT); update the step to run
"./mvnw -B verify" instead (or add a separate step invoking "./mvnw -B verify")
so the integration-test and verify phases (Failsafe) are executed in CI,
ensuring AppIT runs; modify the step that currently uses "./mvnw -B test"
accordingly.
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
Integration tests are never executed before the image is published.
The stated goal of this PR is "testing the full image before publishing," yet the release workflow contains no step that runs mvn verify (or mvnw verify) to execute the AppIT Testcontainers suite. As written, the Docker image is built and pushed with -DskipTests (from the Dockerfile's build stage) and zero post-build verification.
At a minimum, a step like the following should precede the push:
- name: Run integration tests
  run: ./mvnw verify -Dfailsafe.useFile=false
Alternatively, gate the tag-triggered release on a passing CI run that already executes AppIT.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docker-release.yml around lines 39 - 46, The workflow
currently builds and pushes the Docker image in the "Build and push Docker
image" step (docker/build-push-action@v6) without running integration tests; add
a step immediately before the push that runs the Maven integration suite (e.g.,
name it "Run integration tests" and run ./mvnw verify -Dfailsafe.useFile=false)
so AppIT/Testcontainers execute against the built image (or run the verify
against the project before tagging/pushing), ensuring the push only occurs after
mvnw verify succeeds.
FROM eclipse-temurin:25-jre-alpine

WORKDIR /app

# might need to update this later when we have our explicit class names
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
Container runs as root — add a non-root USER.
Trivy (DS-0002) flags this correctly. The final stage has no USER directive, so the JVM process runs as root inside the container.
🔒 Proposed fix
FROM eclipse-temurin:25-jre-alpine
WORKDIR /app
-# might need to update this later when we have our explicit class names
COPY --from=build /app/target/app.jar app.jar
+
+RUN addgroup -S appgroup && adduser -S appuser -G appgroup
+USER appuser
+
 ENTRYPOINT ["java", "-jar", "app.jar"]
The diff above also removes the stale comment on Line 15 — the JAR path is already explicit, so the comment no longer applies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dockerfile` around lines 11 - 17, The image currently runs the JVM as root
because the final Dockerfile stage lacks a non-root USER; create a dedicated
unprivileged user, set ownership of WORKDIR and app.jar to that user, and add a
USER directive before the ENTRYPOINT so the container launches the Java process
as that user (refer to WORKDIR, app.jar, and ENTRYPOINT in the Dockerfile to
locate where to set ownership and add USER). Ensure the created user's
home/permissions allow running the JVM and that any file operations use that
user's UID/GID to avoid running as root.
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers</artifactId>
            <version>2.0.3</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>1.21.4</version>
            <scope>test</scope>
        </dependency>
🧩 Analysis chain
🏁 Script executed:
find . -name "pom.xml" -type f | head -5
Repository: ithsjava25/ithsjava25-java-project-webserver-maven-java-template
Length of output: 117
🏁 Script executed:
cat -n pom.xml | sed -n '40,60p'
Repository: ithsjava25/ithsjava25-java-project-webserver-maven-java-template
Length of output: 977
🌐 Web query:
testcontainers 2.0.3 junit-jupiter 1.21.4 compatibility
💡 Result:
org.testcontainers:junit-jupiter:1.21.4 is not meant to be used with org.testcontainers:testcontainers:2.0.3.
- The 1.21.4 JUnit 5 module (org.testcontainers:junit-jupiter) has a direct dependency on org.testcontainers:testcontainers:1.21.4 (same version line). [1]
- For Testcontainers 2.0.3, the matching JUnit 5 integration is org.testcontainers:testcontainers-junit-jupiter:2.0.3, which depends on org.testcontainers:testcontainers:2.0.3. [2]
So you should align versions, e.g.:
- Option A (1.x line): testcontainers:1.21.4 + junit-jupiter:1.21.4 [1]
- Option B (2.x line): testcontainers:2.0.3 + testcontainers-junit-jupiter:2.0.3 [2]
Sources:
[1] org.testcontainers:junit-jupiter:1.21.4 POM (shows dependency on testcontainers:1.21.4) (central.sonatype.com)
[2] org.testcontainers:testcontainers-junit-jupiter:2.0.3 POM (shows dependency on testcontainers:2.0.3) (central.sonatype.com)
Citations:
- 1: https://central.sonatype.com/artifact/org.testcontainers/junit-jupiter/1.21.4
- 2: https://central.sonatype.com/artifact/org.testcontainers/testcontainers-junit-jupiter/2.0.3
Testcontainers version mismatch: testcontainers:2.0.3 vs junit-jupiter:1.21.4.
The testcontainers artifact uses version 2.0.3, but the junit-jupiter integration artifact from org.testcontainers is pinned to version 1.21.4. The junit-jupiter:1.21.4 module depends on testcontainers:1.21.4, creating a transitive dependency conflict. For Testcontainers 2.x, the correct integration module is testcontainers-junit-jupiter (note the name change), not junit-jupiter.
Align to one of these configurations:
- Option A (1.x line): Downgrade testcontainers to 1.21.4
- Option B (2.x line): Replace junit-jupiter:1.21.4 with testcontainers-junit-jupiter:2.0.3
🐛 Proposed fix — use 2.x line
<dependency>
<groupId>org.testcontainers</groupId>
- <artifactId>junit-jupiter</artifactId>
+ <artifactId>testcontainers-junit-jupiter</artifactId>
- <version>1.21.4</version>
+ <version>2.0.3</version>
<scope>test</scope>
 </dependency>
📝 Committable suggestion
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers</artifactId>
            <version>2.0.3</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
-           <artifactId>junit-jupiter</artifactId>
-           <version>1.21.4</version>
+           <artifactId>testcontainers-junit-jupiter</artifactId>
+           <version>2.0.3</version>
            <scope>test</scope>
        </dependency>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pom.xml` around lines 43 - 54, The pom currently mixes org.testcontainers
artifacts with mismatched versions: artifactId "testcontainers" at version 2.0.3
and "junit-jupiter" at 1.21.4, which creates a transitive conflict; fix by
aligning to one line — either downgrade "testcontainers" to 1.21.4 to match
"junit-jupiter" OR (recommended) replace the dependency with artifactId
"testcontainers-junit-jupiter" at version 2.0.3 (and remove the old
"junit-jupiter" entry) so both integration and core use version 2.0.3
consistently.
    @Override
    public void run() {
        try (socket) {
            var in = socket.getInputStream();
            var out = socket.getOutputStream();

            HttpRequest parsed = httpParser.parse(in);
            String remoteIp = socket.getInetAddress().getHostAddress();

            HttpRequest request = new HttpRequest(
                    parsed.method(),
                    parsed.path(),
                    parsed.queryString(),
                    parsed.httpVersion(),
                    parsed.headers(),
                    parsed.body(),
                    remoteIp
            );

            HttpResponse response = new HttpResponse(
                    200,
                    "OK",
                    java.util.Map.of(),
                    new byte[0]
            );

            var chain = pipeline.createChain(request);
            chain.doFilter(request, response);

            HttpResponseWriter.write(out, response);

        } catch (IOException e) {
            logger.log(Level.SEVERE, "Error while handling request", e);
        }
    }
No HTTP error response is written to the client when IOException is thrown.
If httpParser.parse(in) throws (e.g., empty body, malformed request line — common from scanners or abruptly-closed connections), the catch block only logs and the socket closes silently. The client receives a bare TCP FIN with no HTTP status line. At minimum, a 400 Bad Request should be sent on parse failures and a 500 Internal Server Error on unexpected pipeline failures.
🛡️ Proposed fix
 @Override
public void run() {
- try (socket) {
+ try (socket;
+ var out = socket.getOutputStream()) {
+ HttpResponse response = new HttpResponse(200, "OK", new java.util.HashMap<>(), new byte[0]);
var in = socket.getInputStream();
- var out = socket.getOutputStream();
HttpRequest parsed;
+ try {
parsed = httpParser.parse(in);
+ } catch (IOException e) {
+ logger.log(Level.WARNING, "Malformed request from " + socket.getInetAddress(), e);
+ response.setStatusCode(400);
+ response.setStatusText("Bad Request");
+ response.setBody("400 - Bad Request".getBytes(java.nio.charset.StandardCharsets.UTF_8));
+ HttpResponseWriter.write(out, response);
+ return;
+ }
String remoteIp = socket.getInetAddress().getHostAddress();
HttpRequest request = new HttpRequest(
parsed.method(), parsed.path(), parsed.queryString(),
parsed.httpVersion(), parsed.headers(), parsed.body(), remoteIp
);
- HttpResponse response = new HttpResponse(200, "OK", java.util.Map.of(), new byte[0]);
+
var chain = pipeline.createChain(request);
chain.doFilter(request, response);
HttpResponseWriter.write(out, response);
} catch (IOException e) {
- logger.log(Level.SEVERE, "Error while handling request", e);
+ logger.log(Level.SEVERE, "Unhandled I/O error during request processing", e);
}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/java/org/juv25d/ConnectionHandler.java` around lines 26 - 60, The
run() method currently logs IOExceptions but never writes an HTTP error to the
socket; change the control flow so parse errors produce a 400 and
pipeline/unexpected errors produce a 500: specifically, wrap the call to
httpParser.parse(in) in its own try/catch that on IOException constructs an
HttpResponse(400, "Bad Request", Map.of(), errorBodyBytes) and calls
HttpResponseWriter.write(out, response) before closing; then wrap
pipeline.createChain(request) / chain.doFilter(...) in a separate try/catch that
catches Exception, builds an HttpResponse(500, "Internal Server Error",
Map.of(), errorBodyBytes) and writes it via HttpResponseWriter.write(out,
response); ensure all writes are guarded (catch secondary IOExceptions and log
via logger) and keep references to the existing symbols (run, httpParser.parse,
pipeline.createChain, chain.doFilter, HttpResponseWriter.write, HttpResponse)
when implementing.
    public void addGlobalFilter(Filter filter, int order) {
        globalFilters.add(new FilterRegistration(filter, order, null));
        sortedGlobalFilters = globalFilters.stream()
                .sorted()
                .map(FilterRegistration::filter)
                .collect(Collectors.toUnmodifiableList());
    }
Non-atomic compound operation in addGlobalFilter races on sortedGlobalFilters.
CopyOnWriteArrayList.add() and the subsequent stream-to-sorted-list assignment are two separate steps. Consider this interleaving:
1. Thread A: globalFilters.add(fa) completes; the CopyOnWriteArrayList snapshot is [fa].
2. Thread B: globalFilters.add(fb) completes; snapshot is [fa, fb].
3. Thread B: streams [fa, fb] → writes sortedGlobalFilters = [fa, fb].
4. Thread A: streams its old snapshot [fa] → writes sortedGlobalFilters = [fa].
fb is now permanently absent from sortedGlobalFilters even though it is in globalFilters. The volatile write in step 4 publishes the stale list to all threads. The fix is to synchronize the compound add+sort.
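The lost update can be replayed deterministically in a sequential sketch. This is an illustration only, not the project's code: plain Strings stand in for the FilterRegistration objects, and the two "threads" are simulated by the ordering of statements.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class LostUpdateDemo {
    static final CopyOnWriteArrayList<String> globalFilters = new CopyOnWriteArrayList<>();
    static volatile List<String> sortedGlobalFilters = List.of();

    public static void main(String[] args) {
        globalFilters.add("fa");                               // step 1: thread A's add completes
        List<String> threadAView = List.copyOf(globalFilters); // thread A captures its snapshot [fa]
        globalFilters.add("fb");                               // step 2: thread B's add completes
        sortedGlobalFilters = List.copyOf(globalFilters);      // step 3: thread B publishes [fa, fb]
        sortedGlobalFilters = threadAView;                     // step 4: thread A's late write wins
        System.out.println(sortedGlobalFilters);               // [fa] — fb is lost
        System.out.println(globalFilters);                     // [fa, fb]
    }
}
```

Synchronizing the add-and-recompute block removes the window between the two publishes, because no thread can capture a snapshot while another is still mid-update.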
🛡️ Proposed fix
- public void addGlobalFilter(Filter filter, int order) {
- globalFilters.add(new FilterRegistration(filter, order, null));
- sortedGlobalFilters = globalFilters.stream()
- .sorted()
- .map(FilterRegistration::filter)
- .collect(Collectors.toUnmodifiableList());
- }
+ public synchronized void addGlobalFilter(Filter filter, int order) {
+ globalFilters.add(new FilterRegistration(filter, order, null));
+ sortedGlobalFilters = globalFilters.stream()
+ .sorted()
+ .map(FilterRegistration::filter)
+ .collect(Collectors.toUnmodifiableList());
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/java/org/juv25d/Pipeline.java` around lines 23 - 29, addGlobalFilter
currently performs a non-atomic two-step update (globalFilters.add(...) then
recompute sortedGlobalFilters) which can race and publish a stale
sortedGlobalFilters; make the compound operation atomic by synchronizing the
add+recompute block (e.g., synchronize on a dedicated lock or the Pipeline
instance) so that globalFilters.add(new FilterRegistration(...)) and the
stream/sort that assigns sortedGlobalFilters execute together; ensure you
reference the same lock in any other code paths that update sortedGlobalFilters
(use the same synchronization around operations that modify globalFilters or
sortedGlobalFilters) to prevent the race.
    public FilterChainImpl createChain(HttpRequest request) {
        List<Filter> filters = new ArrayList<>();
        filters.addAll(sortedGlobalFilters);
        String path = request.path();
        List<FilterRegistration> exactMatches = routeFilters.get(path);
        if (exactMatches != null) {
            exactMatches.stream()
                    .sorted()
                    .map(FilterRegistration::filter)
                    .forEach(filters::add);
        }

        for (Map.Entry<String, List<FilterRegistration>> entry : routeFilters.entrySet()) {
            String pattern = entry.getKey();
            if (pattern.endsWith("*") && path.startsWith(pattern.substring(0, pattern.length() - 1))) {
                entry.getValue().stream()
                        .sorted()
                        .map(FilterRegistration::filter)
                        .forEach(filters::add);
            }
        }
        return new FilterChainImpl(filters, router); // Pass router instead of plugin
createChain passes a potentially-null router to FilterChainImpl, causing a deferred NPE.
router is null until setRouter() is called. When createChain builds a FilterChainImpl with a null router and all filters are exhausted, FilterChainImpl.doFilter calls router.resolve(req) and throws a NullPointerException deep inside the request-handling path — far from the actual misconfiguration. A fail-fast guard surfaces the problem immediately at chain creation time.
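As a minimal illustration of the difference (hypothetical names, not the project's classes), the guard moves the failure from deep inside request handling to the misconfiguration site, with a message that names the missing step:

```java
import java.util.function.Supplier;

public class FailFastDemo {
    // Stand-in for Pipeline's router field, left unset as in the misconfiguration case.
    static Object router = null;

    static Supplier<String> createChain() {
        if (router == null) {
            // Fails immediately, at chain creation, naming the missing configuration call.
            throw new IllegalStateException(
                    "Router has not been configured. Call setRouter() before processing requests.");
        }
        return () -> router.toString();
    }

    public static void main(String[] args) {
        try {
            createChain();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        // Without the guard, the null would only surface later, inside the
        // request-handling path, as a bare NullPointerException with no hint
        // that setRouter() was never called.
    }
}
```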
🛡️ Proposed fix
public FilterChainImpl createChain(HttpRequest request) {
+ if (router == null) {
+ throw new IllegalStateException("Router has not been configured. Call setRouter() before processing requests.");
+ }
 List<Filter> filters = new ArrayList<>();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/java/org/juv25d/Pipeline.java` around lines 43 - 64, createChain
currently constructs a FilterChainImpl with a possibly-null router which leads
to a deferred NPE in FilterChainImpl.doFilter when router.resolve(req) is
invoked; update createChain to validate that the field router is non-null (and
if null, throw an IllegalStateException with a clear message) before returning
new FilterChainImpl(filters, router), so callers see a fail-fast error when
setRouter() hasn't been called rather than a deep NPE; reference createChain,
router, setRouter, and FilterChainImpl.doFilter in the change.
        Map<String, Object> serverConfig = (Map<String, Object>) config.get("server");
        if (serverConfig != null) {
            this.port = (Integer) serverConfig.getOrDefault("port", 8080);
            this.rootDirectory = (String) serverConfig.getOrDefault("root-dir", "static");
        }

        // logging
        Map<String, Object> loggingConfig = (Map<String, Object>) config.get("logging");
        if (loggingConfig != null) {
            this.logLevel = (String) loggingConfig.get("level");
        }
Defaults not applied when server or logging keys are absent from YAML.
If the YAML file exists but lacks the server key, serverConfig will be null and this block is skipped entirely — leaving port at 0 and rootDirectory at null instead of the intended defaults (8080, "static"). Consider initializing the fields at declaration or applying defaults after the conditional blocks.
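A quick sketch of the failure mode and the field-default remedy (hypothetical class and field names mirroring the loader; a config map without a server section leaves the fields untouched unless they carry defaults at declaration):

```java
import java.util.Map;

public class DefaultsDemo {
    // Defaults applied at declaration survive a missing "server" section.
    static int port = 8080;
    static String rootDirectory = "static";

    public static void main(String[] args) {
        Map<String, Object> config = Map.of();  // parsed YAML that has no "server" key
        @SuppressWarnings("unchecked")
        Map<String, Object> serverConfig = (Map<String, Object>) config.get("server");
        if (serverConfig != null) {             // skipped entirely when the key is absent
            port = (Integer) serverConfig.getOrDefault("port", 8080);
            rootDirectory = (String) serverConfig.getOrDefault("root-dir", "static");
        }
        // Without the declaration defaults, port would be 0 and rootDirectory null here.
        System.out.println(port + " " + rootDirectory);
    }
}
```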
🐛 Proposed fix — initialize fields with defaults
public class ConfigLoader {
private static ConfigLoader instance;
- private int port;
- private String logLevel;
- private String rootDirectory;
+ private int port = 8080;
+ private String logLevel;
+ private String rootDirectory = "static";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/java/org/juv25d/util/ConfigLoader.java` around lines 36 - 46, The
ConfigLoader currently skips applying defaults when server or logging sections
are missing: ensure port, rootDirectory and logLevel have sane defaults by
initializing the fields (port = 8080, rootDirectory = "static", logLevel =
"INFO" or your desired default) on declaration or by assigning these defaults
after the null checks; specifically update the ConfigLoader class fields (port,
rootDirectory, logLevel) or add a fallback assignment after the serverConfig /
loggingConfig blocks so that when serverConfig == null or loggingConfig == null
the default values are preserved, and keep existing casts and getOrDefault logic
for when the maps are present.
        } catch (Exception e) {
            throw new RuntimeException("Failed to load application config");
        }
Exception cause is swallowed — chain the original exception.
new RuntimeException("Failed to load application config") discards e, making it very hard to diagnose the root cause (missing file, parse error, class cast, etc.). Also note this catch wraps the intentional IllegalArgumentException from Line 30 with a less informative message.
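A minimal demonstration of what chaining preserves (hypothetical exception messages; the inner exception stands in for whatever the loader actually throws):

```java
public class ChainDemo {
    public static void main(String[] args) {
        try {
            try {
                // Stand-in for the original failure, e.g. a validation
                // IllegalArgumentException or a YAML parse error.
                throw new IllegalArgumentException("missing 'server' key");
            } catch (Exception e) {
                // Passing e as the cause keeps the root failure in the stack trace.
                throw new RuntimeException("Failed to load application config", e);
            }
        } catch (RuntimeException wrapped) {
            System.out.println(wrapped.getMessage());            // Failed to load application config
            System.out.println(wrapped.getCause().getMessage()); // missing 'server' key
        }
    }
}
```

With the single-argument constructor, getCause() returns null and the stack trace stops at the wrapper, which is exactly the diagnosis problem described above.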
🐛 Proposed fix
} catch (Exception e) {
- throw new RuntimeException("Failed to load application config");
+ throw new RuntimeException("Failed to load application config", e);
 }
📝 Committable suggestion
-        } catch (Exception e) {
-            throw new RuntimeException("Failed to load application config");
-        }
+        } catch (Exception e) {
+            throw new RuntimeException("Failed to load application config", e);
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/java/org/juv25d/util/ConfigLoader.java` around lines 48 - 50, The
catch block in ConfigLoader that currently throws new RuntimeException("Failed
to load application config") swallows the original exception; update the catch
to preserve the root cause by either rethrowing the original exception when it
is an IllegalArgumentException from validateConfig (or other specific known
types) or by chaining it as the cause when creating the RuntimeException (e.g.,
new RuntimeException("Failed to load application config", e)); locate the catch
in ConfigLoader and implement one of these changes so the original exception (e)
is not lost.
    @Container
    @SuppressWarnings("resource")
    public static GenericContainer<?> server = new GenericContainer<>(
            new ImageFromDockerfile("java-http-server-test")
                    .withFileFromPath(".", Paths.get("."))
    ).withExposedPorts(8080);
Missing container readiness check — tests will be flaky under cold-start conditions.
GenericContainer starts without a .waitingFor(...) strategy. The container may not be accepting HTTP connections by the time shouldReturnIndexHtml fires (Docker image build + JVM startup can easily exceed the time before the first client.send()), causing intermittent ConnectException failures.
Add a Testcontainers Wait strategy:
🔧 Proposed fix
+import org.testcontainers.containers.wait.strategy.Wait;
...
 @Container
 @SuppressWarnings("resource")
public static GenericContainer<?> server = new GenericContainer<>(
new ImageFromDockerfile("java-http-server-test")
.withFileFromPath(".", Paths.get("."))
- ).withExposedPorts(8080);
+ ).withExposedPorts(8080)
+     .waitingFor(Wait.forHttp("/").forStatusCode(200));
📝 Committable suggestion
    @Container
    @SuppressWarnings("resource")
    public static GenericContainer<?> server = new GenericContainer<>(
            new ImageFromDockerfile("java-http-server-test")
                    .withFileFromPath(".", Paths.get("."))
-   ).withExposedPorts(8080);
+   ).withExposedPorts(8080)
+           .waitingFor(Wait.forHttp("/").forStatusCode(200));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/test/java/org/juv25d/AppIT.java` around lines 20 - 25, The
GenericContainer "server" starts without a readiness strategy causing flaky
ConnectException in shouldReturnIndexHtml; update the server container
initialization (GenericContainer<?> server / ImageFromDockerfile) to include a
Testcontainers Wait strategy (e.g., waitingFor(Wait.forHttp("/") or a specific
health endpoint and expected status code) and set a reasonable startup timeout
via withStartupTimeout(Duration.ofSeconds(...))) so the test waits for the HTTP
endpoint to be ready before client.send() is invoked.
Summary
This PR introduces an integration-testing framework using Testcontainers. It allows the project to automatically build the Docker image and run real HTTP requests against the server in a containerized environment before publishing.
A key benefit of using Testcontainers is that all Docker containers and resources are automatically cleaned up after the tests finish, ensuring no stale containers are left running on the developer's machine.
Changes
Test plan
Run the following command to verify the full build and integration tests:
./mvnw verify -Dfailsafe.useFile=false
Verify that AppIT starts the Docker container, all tests pass with a BUILD SUCCESS, and the container is automatically removed.
Keep in mind that Docker needs to be running on the machine when executing the above command.