MVP Bun runtime for AWS and local #270
📝 Walkthrough

Adds first-class Bun runtime support across benchmarks, runtimes, storage wrappers, Docker images, packaging, and CLI, including Bun implementations for sleep, dynamic HTML, uploader, and thumbnailer, plus AWS and local Bun runtime/installer files and framework integration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant LambdaAPI as Lambda Runtime API
    participant Runtime as bun/runtime.js
    participant Handler as benchmarks/wrappers/aws/bun/handler.js
    participant Function as ./function/function.js
    participant Storage as benchmarks/wrappers/aws/bun/storage.js
    Note over LambdaAPI,Runtime: New invocation available
    LambdaAPI->>Runtime: GET /2018-06-01/runtime/invocation/next
    Runtime->>Handler: invoke handler(event, context)
    Handler->>Function: require and call function.handler(event)
    alt function uses storage
        Function->>Storage: upload/download/stream calls
        Storage-->>Function: promise / stream handles
    end
    Handler-->>Runtime: return response (status 200 + body)
    Runtime->>LambdaAPI: POST /invocation/{id}/response
    Runtime-->>LambdaAPI: on error POST /invocation/{id}/error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 18
🤖 Fix all issues with AI agents
In @benchmarks/100.webapps/110.dynamic-html/bun/templates/template.html:
- Line 6: The link tag in template.html uses an insecure HTTP Bootstrap CDN and
an outdated version; update the href on the <link> element to use HTTPS
(https://) and replace the v3.0.0 CDN URL with a current, supported Bootstrap
release (for example a Bootstrap 5 CDN URL) to eliminate mixed-content warnings
and address known vulnerabilities.
In @benchmarks/100.webapps/120.uploader/bun/function.js:
- Around line 24-26: The HTTP request stream created by request(url) is missing
error handling which can cause unhandled rejections; attach 'error' listeners to
the request stream and the file stream used by
fs.createWriteStream(download_path) and ensure streamToPromise(file) is rejected
on those errors (e.g., add request.on('error', ...) and file.on('error', ...)
and propagate/abort the other stream as needed); also handle non-2xx HTTP
responses from the request stream (check 'response' or statusCode on the
request) and reject the promise with a clear error so callers of
streamToPromise(file) get proper failure information.
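The wiring described above can be sketched as follows. This is an illustrative sketch, not the benchmark's actual code: it assumes the `request` module's stream API (emits `'response'`, supports `abort()`), and the function name mirrors the `streamToPromise` helper the review mentions.

```javascript
// Illustrative sketch: propagate errors between the HTTP request stream and
// the file write stream, and fail on non-2xx responses.
function streamToPromise(requestStream, fileStream) {
  return new Promise((resolve, reject) => {
    requestStream.on('response', (res) => {
      if (res.statusCode < 200 || res.statusCode >= 300) {
        if (typeof requestStream.abort === 'function') requestStream.abort();
        reject(new Error(`Download failed: HTTP ${res.statusCode}`));
      }
    });
    requestStream.on('error', (err) => {
      fileStream.destroy(err); // tear down the other side as well
      reject(err);
    });
    fileStream.on('error', reject);
    fileStream.on('finish', resolve);
    requestStream.pipe(fileStream);
  });
}
```

Attaching handlers to both streams ensures neither side can fail silently and leave the promise pending.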
- Around line 27-35: keyName is declared but only set inside the async promise
callback (via storage_handler.upload), so if the upload fails you return
undefined; initialize keyName to a safe default (e.g., null or empty string) or
restructure to capture the upload result directly: call const [resultKey,
uploadPromise] = storage_handler.upload(bucket, path.join(output_prefix,
upload_key), download_path); await uploadPromise; then set keyName = resultKey
(or wrap in try/catch to handle errors and set a fallback) before returning
{bucket, url, key: keyName}; ensure references to storage_handler.upload,
keyName, upload/promise are updated accordingly.
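The restructuring can be sketched like this. The function and parameter names are assumed from the review's description (the real `storage_handler.upload` is assumed to return a `[uniqueKey, promise]` pair); the `url` field from the original return value is elided here.

```javascript
// Illustrative restructuring: capture the unique key returned by upload()
// directly, and fall back to null if the upload fails.
async function uploadAndReturnKey(storage_handler, bucket, upload_key, download_path) {
  let keyName = null; // safe default if the upload fails
  try {
    const [resultKey, uploadPromise] =
      storage_handler.upload(bucket, upload_key, download_path);
    await uploadPromise;
    keyName = resultKey;
  } catch (err) {
    console.error('upload failed:', err);
  }
  return { bucket, key: keyName };
}
```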
In @benchmarks/100.webapps/120.uploader/bun/package.json:
- Line 8: Remove the deprecated "request" dependency from package.json and
replace any usage of request(url).pipe(file) with Bun's native fetch streaming:
perform fetch(url), check response.ok, then stream the response body to a
writable file stream via response.body.pipeTo(writableStream) (or use
Bun.file/Bun.write APIs to create the writable stream). Update any code that
imports or references the "request" module to use this fetch-based download flow
instead.
In @benchmarks/200.multimedia/210.thumbnailer/bun/function.js:
- Around line 9-14: The assignments in the thumbnailer entry (the block that
sets bucket = event.bucket.bucket, input_prefix = event.bucket.input,
output_prefix = event.bucket.output, and width/height from event.object) are
missing declarations and create implicit globals; fix by declaring them with
appropriate keywords (e.g., const bucket = event.bucket.bucket, const
input_prefix = event.bucket.input, const output_prefix = event.bucket.output,
and use const or let for width and height depending on whether they are
reassigned) in the same location (the top of the function handling the event) so
they are block-scoped and do not pollute the global scope.
In @benchmarks/200.multimedia/210.thumbnailer/bun/package.json:
- Around line 7-9: The sharp dependency ("sharp": "0.32.6") needs Bun to allow
its postinstall lifecycle script; either add "sharp" to package.json's
trustedDependencies array (so Bun runs postinstall) or instruct the environment
to run `bun pm trust sharp` before install, and if installation still fails
check install logs and configure sharp_dist_base_url or build libvips locally
for custom binary hosts; update package.json or CI setup accordingly to ensure
sharp's postinstall executes under Bun.
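The first option can be sketched as a package.json fragment (the surrounding fields are elided; only the dependency pin from the review and the `trustedDependencies` entry are shown):

```json
{
  "dependencies": {
    "sharp": "0.32.6"
  },
  "trustedDependencies": ["sharp"]
}
```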
In @benchmarks/wrappers/aws/bun/handler.js:
- Line 15: Wrap the JSON.parse call that assigns input_data when http_trigger is
true in a try/catch to handle malformed JSON; catch the SyntaxError from parsing
event.body, log or record the parsing error, and return a proper HTTP 400/Bad
Request response (or set an appropriate error payload) instead of letting the
handler throw. Specifically, update the code around the input_data assignment
(the http_trigger conditional that uses JSON.parse(event.body)) to try parsing,
catch errors, and handle them with a clear error response and logging.
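The guarded parse can be sketched like this. The names mirror the review's description of handler.js rather than the file itself; returning a tagged object lets the caller send the 400 response.

```javascript
// Illustrative sketch: return either parsed input or a ready-made 400 response.
function parseInput(event, http_trigger) {
  let input_data = event;
  if (http_trigger) {
    try {
      input_data = JSON.parse(event.body);
    } catch (err) {
      console.error('Malformed JSON in event.body:', err.message);
      return { error: { statusCode: 400, body: 'Bad Request: invalid JSON body' } };
    }
  }
  return { input: input_data };
}
```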
- Line 16: The require('./function/function') call is inside the handler and
causes per-invocation overhead; move it to module scope by hoisting the var func
= require('./function/function') to the top of the file (outside the exported
handler function) so the module is loaded once and reused across warm AWS Lambda
invocations, leaving the handler (e.g., the exported function that currently
references func) unchanged.
In @benchmarks/wrappers/aws/bun/runtime.js:
- Around line 15-17: The fetch call to `${API_BASE}/invocation/next` (producing
nextResponse, event, requestId) lacks error handling and can crash the runtime
loop; wrap the fetch and subsequent nextResponse.json()/headers access in a
try/catch, detect non-OK HTTP responses (nextResponse.ok) and throw or handle
them, and on any error log the failure (including the error and request URL),
optionally wait/retry or continue the loop so the runtime doesn't exit; ensure
the code that uses event and requestId only runs when nextResponse was
successfully retrieved and parsed.
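A defensive poll step for the runtime loop can be sketched as below. The endpoint and header name follow the Lambda custom runtime API; `fetchFn` is injectable only so the sketch is testable, and returning `null` signals the caller to skip the iteration.

```javascript
// Illustrative sketch of a guarded "next invocation" fetch for the loop.
async function nextInvocation(apiBase, fetchFn = fetch) {
  try {
    const res = await fetchFn(`${apiBase}/invocation/next`);
    if (!res.ok) throw new Error(`runtime API returned HTTP ${res.status}`);
    return {
      requestId: res.headers.get('lambda-runtime-aws-request-id'),
      event: await res.json(),
    };
  } catch (err) {
    console.error(`failed to fetch next invocation from ${apiBase}:`, err.message);
    return null; // caller should continue the loop (optionally after a delay)
  }
}
```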
- Around line 25-38: The POSTs to `${API_BASE}/invocation/${requestId}/response`
and `${API_BASE}/invocation/${requestId}/error` are awaited without error
handling so any network failure will throw and crash the runtime; wrap each
await fetch that sends the response and the error payload in its own try/catch
(or append a .catch) so failures are logged (include requestId and
endpoint/context) but do not rethrow, ensuring the runtime continues processing
subsequent invocations; reference the API_BASE variable, the requestId, response
object and the caught error to produce clear log messages.
In @benchmarks/wrappers/aws/bun/storage.js:
- Around line 29-32: The download function is shadowing the parameter file with
a local var file (the write stream), so this.S3.getObject is receiving the
WriteStream instead of the S3 key; rename the local write stream variable (e.g.,
writeStream or dest) and use the original file parameter as the Key in the
this.S3.getObject call, and prefer let/const for the stream variable in the
download method to avoid future shadowing.
In @benchmarks/wrappers/local/bun/storage.js:
- Around line 52-57: get_instance currently tries to construct a new storage()
which is undefined and will throw; change the constructor call to use the
correct class name minio_storage so the singleton stores an instance of
minio_storage (i.e., replace new storage() with new minio_storage), leaving
this.instance logic intact.
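The corrected singleton can be sketched like this (the MinIO client construction is stubbed out; only the `get_instance` fix is shown):

```javascript
// Illustrative sketch of the corrected singleton.
class minio_storage {
  static get_instance() {
    if (!this.instance) {
      this.instance = new minio_storage(); // was: new storage(), which is undefined
    }
    return this.instance;
  }
}
```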
- Around line 40-45: In uploadStream, write_stream is a PassThrough and has no
size property, so remove the undefined size argument from the
this.client.putObject call (or explicitly pass undefined if you prefer); locate
uploadStream and change the this.client.putObject(bucket, uniqueName,
write_stream, write_stream.size) invocation to call putObject without the size
parameter (or with explicit undefined) to make the intent clear.
In @dockerfiles/bun_installer.sh:
- Line 22: The script currently runs the command "rm -r bun.lock" which will
error if bun.lock is absent; update that invocation to use the forced remove
flag instead (use -f rather than -r) so removing bun.lock is safe when the file
doesn't exist.
- Line 24: The bootstrap script writes a faulty shell conditional that tries to
execute $LAMBDA_TASK_ROOT; update the echoed script so the if checks the
variable correctly (e.g., use [ -n "$LAMBDA_TASK_ROOT" ] or [ -d
"$LAMBDA_TASK_ROOT" ]), and ensure the conditional uses proper shell syntax with
then/fi around cd $LAMBDA_TASK_ROOT; keep the rest of the bootstrap invocation
(./bun --bun runtime.js) unchanged and ensure quotes are preserved when echoing
the script.
- Line 3: The script currently runs the command `cd /mnt/function` without
checking its result; update the script to verify the directory change succeeded
before continuing by testing the exit status of the `cd /mnt/function` command
and, on failure, print a clear error message and exit with a non-zero status so
subsequent commands do not run in the wrong directory.
In @dockerfiles/local/bun/run_server.sh:
- Around line 3-4: The script run_server.sh currently runs "cd /sebs" without
checking for failure, risking starting bun /sebs/server.js in the wrong
directory; update the script to check the exit status of the cd command (or use
a conditional like "cd /sebs || exit 1") before running bun /sebs/server.js "$@"
so the script aborts with a non-zero exit code if changing to /sebs fails and
does not execute the server in an unexpected location.
In @dockerfiles/local/bun/server.js:
- Around line 44-46: The code is using process.argv[2] directly in app.listen
(app.listen(port=process.argv[2], ...)) without validation; parse
process.argv[2] into an integer (e.g., const port = parseInt(process.argv[2],
10)), verify it's a finite number within the valid port range (1–65535), and if
invalid either exit with a clear error via console.error/process.exit(1) or fall
back to a safe default port; then pass the validated port variable to app.listen
and use that same variable in the console.log.
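The validation can be sketched as a small helper; the fallback port of 9000 is an assumption for illustration, not a value from the project.

```javascript
// Illustrative sketch: validate process.argv[2] before passing it to app.listen.
function parsePort(raw, fallback = 9000) {
  const port = parseInt(raw, 10);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    console.error(`Invalid port "${raw}", falling back to ${fallback}`);
    return fallback;
  }
  return port;
}
```

The server would then call `app.listen(parsePort(process.argv[2]), ...)` and log the same validated value.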
🧹 Nitpick comments (13)
benchmarks/000.microbenchmarks/010.sleep/bun/package.json (1)
1-9: Consider populating package metadata fields. The package.json contains empty metadata fields (name, description, author, license). While this may be intentional for benchmark code, providing at least a meaningful name could improve maintainability and clarity.
📝 Example metadata
```diff
 {
-  "name": "",
+  "name": "sebs-benchmark-sleep-bun",
   "version": "1.0.0",
-  "description": "",
+  "description": "Sleep benchmark for Bun runtime",
   "author": "",
-  "license": "",
+  "license": "MIT",
   "dependencies": {
   }
 }
```
benchmarks/100.webapps/110.dynamic-html/bun/package.json (1)
1-6: Consider populating package metadata fields. Similar to the sleep benchmark, this package.json contains empty metadata fields. Providing meaningful values could improve maintainability.
📝 Example metadata
```diff
 {
-  "name": "",
+  "name": "sebs-benchmark-dynamic-html-bun",
   "version": "1.0.0",
-  "description": "",
+  "description": "Dynamic HTML benchmark for Bun runtime",
   "author": "",
-  "license": "",
+  "license": "MIT",
   "dependencies": {
     "mustache": "^3.2.1"
   }
 }
```
docs/modularity.md (1)
200-200: Consider improving sentence clarity. The note provides helpful troubleshooting information but could be more grammatically complete.
✏️ Suggested improvement
```diff
-*Note: Building the docker files might fail to the scylladb volume having the wrong permissions. Can be circumvented using `chown -R <your user>:<your user> scylladb-volume`.*
+*Note: Building the docker files might fail due to the scylladb volume having the wrong permissions. This can be circumvented using `chown -R <your user>:<your user> scylladb-volume`.*
```
benchmarks/200.multimedia/210.thumbnailer/bun/package.json (1)
1-6: Consider adding basic metadata for benchmark clarity. While empty metadata fields may be acceptable for internal benchmarks, consider adding at least a descriptive name (e.g., "thumbnailer-benchmark") and license field to maintain consistency with project conventions.
benchmarks/000.microbenchmarks/010.sleep/bun/function.js (1)
4-4: Prefer `const` over `var` for immutable variable. Using `const` instead of `var` for the `sleep` variable provides better scoping and signals intent that the value won't be reassigned.
♻️ Proposed refactor
```diff
-    var sleep = event.sleep;
+    const sleep = event.sleep;
```
benchmarks/200.multimedia/210.thumbnailer/bun/function.js (2)
16-16: Use `substring()` instead of deprecated `substr()`. The `substr()` method is deprecated. Use `substring()` or `slice()` instead for better compatibility and future-proofing.
♻️ Proposed refactor
```diff
-    let upload_key = key.substr(0, pos < 0 ? key.length : pos) + '.png';
+    let upload_key = key.substring(0, pos < 0 ? key.length : pos) + '.png';
```
19-26: Consider adding error handling for stream operations. The stream piping and promise handling lack explicit error handling. Stream failures (network issues, disk errors, invalid images) or sharp processing errors could result in unhandled rejections or hung promises.
Consider wrapping the operations in try-catch and/or adding error handlers to the streams and promises to ensure graceful failure handling and proper resource cleanup.
benchmarks/100.webapps/110.dynamic-html/bun/init.sh (1)
1-10: Consider adding defensive checks for robustness. The script works correctly but could benefit from defensive programming practices:
- Validate that `DIR` argument is provided and valid
- Verify the source templates directory exists before copying
- Quote variables to handle paths with spaces safely
♻️ Proposed improvements
```diff
 #!/bin/bash
+
+set -euo pipefail
 DIR=$1
 VERBOSE=$2
+
+if [ -z "$DIR" ]; then
+  echo "Error: DIR argument is required" >&2
+  exit 1
+fi
+
 SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 path="${SCRIPT_DIR}/templates/"
+
+if [ ! -d "$path" ]; then
+  echo "Error: Templates directory not found at $path" >&2
+  exit 1
+fi
+
 if [ "$VERBOSE" = true ]; then
   echo "Update ${DIR} with static templates ${path}"
 fi
-cp -r ${SCRIPT_DIR}/templates ${DIR}
+cp -r "${SCRIPT_DIR}/templates" "${DIR}"
```
benchmarks/wrappers/aws/bun/runtime.js (1)
19-20: Consider expanding context object with additional Lambda fields. The context currently only includes `awsRequestId`. For compatibility with standard Lambda handlers, consider adding other common fields like `functionName`, `functionVersion`, `memoryLimitInMB`, etc., which are available from environment variables.
📋 Optional: Expand context with standard Lambda fields
```diff
 // NOTE: If more context is needed inside the handler, they can be added here
-const context = { awsRequestId: requestId };
+const context = {
+  awsRequestId: requestId,
+  functionName: process.env.AWS_LAMBDA_FUNCTION_NAME,
+  functionVersion: process.env.AWS_LAMBDA_FUNCTION_VERSION,
+  memoryLimitInMB: process.env.AWS_LAMBDA_FUNCTION_MEMORY_SIZE,
+  logGroupName: process.env.AWS_LAMBDA_LOG_GROUP_NAME,
+  logStreamName: process.env.AWS_LAMBDA_LOG_STREAM_NAME,
+};
```
benchmarks/100.webapps/110.dynamic-html/bun/function.js (1)
9-28: Consider using modern fs.promises API for cleaner async/await. The handler correctly wraps callback-based `fs.readFile` in a Promise, but Bun supports the modern `fs.promises` API which would result in cleaner, more maintainable code.
♻️ Proposed refactor using fs.promises
```diff
-const Mustache = require('mustache'),
-    fs = require('fs'),
-    path = require('path');
+const Mustache = require('mustache');
+const fs = require('fs').promises;
+const path = require('path');

 function random(b, e) {
     return Math.round(Math.random() * (e - b) + b);
 }

 exports.handler = async function(event) {
     var random_numbers = new Array(event.random_len);
     for(var i = 0; i < event.random_len; ++i) {
         random_numbers[i] = random(0, 100);
     }
     var input = {
         cur_time: new Date().toLocaleString(),
         username: event.username,
         random_numbers: random_numbers
     };
     var file = path.resolve(__dirname, 'templates', 'template.html');
-    return new Promise((resolve, reject) => {
-        fs.readFile(file, "utf-8",
-            function(err, data) {
-                if(err) reject(err);
-                resolve(Mustache.render(data, input));
-            }
-        );
-    });
+    const data = await fs.readFile(file, "utf-8");
+    return Mustache.render(data, input);
 };
```
dockerfiles/local/bun/Dockerfile.run (1)
4-10: Consider copying `gosu` from the `tianon/gosu` image for consistency. The PR description states that `gosu` is copied from a Docker image to pin the version, but this Dockerfile installs it via `apt-get`. The `Dockerfile.build` files use `COPY --from=tianon/gosu:1.19-debian`, which pins the version and works with minimal base images.
♻️ Proposed fix for consistency
```diff
 ARG BASE_IMAGE
 FROM ${BASE_IMAGE}
+
+COPY --from=tianon/gosu:1.19-debian /usr/local/bin/gosu /usr/local/bin/gosu
```

Then remove `gosu` from the apt-get install line:

```diff
 ARG DEBIAN_FRONTEND=noninteractive
 RUN deps=''\
     && apt-get update\
-    && apt-get install -y --no-install-recommends curl net-tools gosu python3 sudo ${deps}\
+    && apt-get install -y --no-install-recommends curl net-tools python3 sudo ${deps}\
     && apt-get purge -y --auto-remove ${deps}
```
dockerfiles/local/bun/server.js (1)
1-6: Remove unused import and commented code. The `http` module is imported but never used. Additionally, line 5 contains commented-out ES6 import syntax that should be removed.
♻️ Proposed cleanup
```diff
-const http = require('http'),
-    strftime = require('strftime'),
+const strftime = require('strftime'),
     express = require('express'),
     f = require('/function/function/function');
-//import { v4 as uuidv4 } from 'uuid';
 const { v4: uuidv4 } = require('uuid');
```
benchmarks/wrappers/local/bun/storage.js (1)
47-50: Remove unused variable. The `read_stream` PassThrough stream is created but never used. The method directly returns the result of `getObject()`, making this variable unnecessary.
♻️ Proposed cleanup
```diff
 downloadStream(bucket, file) {
-    var read_stream = new stream.PassThrough();
     return this.client.getObject(bucket, file);
 };
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (37)
- .gitignore
- benchmarks/000.microbenchmarks/010.sleep/bun/function.js
- benchmarks/000.microbenchmarks/010.sleep/bun/package.json
- benchmarks/000.microbenchmarks/010.sleep/config.json
- benchmarks/100.webapps/110.dynamic-html/bun/function.js
- benchmarks/100.webapps/110.dynamic-html/bun/init.sh
- benchmarks/100.webapps/110.dynamic-html/bun/package.json
- benchmarks/100.webapps/110.dynamic-html/bun/templates/template.html
- benchmarks/100.webapps/110.dynamic-html/config.json
- benchmarks/100.webapps/120.uploader/bun/function.js
- benchmarks/100.webapps/120.uploader/bun/package.json
- benchmarks/100.webapps/120.uploader/config.json
- benchmarks/200.multimedia/210.thumbnailer/bun/function.js
- benchmarks/200.multimedia/210.thumbnailer/bun/package.json
- benchmarks/200.multimedia/210.thumbnailer/config.json
- benchmarks/wrappers/aws/bun/handler.js
- benchmarks/wrappers/aws/bun/runtime.js
- benchmarks/wrappers/aws/bun/storage.js
- benchmarks/wrappers/local/bun/storage.js
- config/systems.json
- dockerfiles/aws/bun/Dockerfile.build
- dockerfiles/aws/bun/Dockerfile.function
- dockerfiles/bun_installer.sh
- dockerfiles/local/bun/Dockerfile.build
- dockerfiles/local/bun/Dockerfile.run
- dockerfiles/local/bun/run_server.sh
- dockerfiles/local/bun/runners.json
- dockerfiles/local/bun/server.js
- dockerfiles/local/bun/timeit.sh
- dockerfiles/local/runner.py
- docs/modularity.md
- sebs.py
- sebs/aws/aws.py
- sebs/benchmark.py
- sebs/faas/function.py
- sebs/local/local.py
- tools/build_docker_images.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: mcopik
Repo: spcl/serverless-benchmarks PR: 99
File: sebs/azure/azure.py:137-0
Timestamp: 2025-06-19T16:19:09.758Z
Learning: C++ support in the serverless-benchmarks project is currently limited to AWS Lambda only. Azure Functions implementation (sebs/azure/azure.py) does not support C++ and only supports Python and Node.js languages.
Learnt from: mcopik
Repo: spcl/serverless-benchmarks PR: 99
File: benchmarks/000.microbenchmarks/010.sleep/cpp/main.cpp:11-0
Timestamp: 2025-06-19T16:15:01.811Z
Learning: In the serverless-benchmarks project, input validation for benchmark functions is not required since the benchmarks are invoked directly by the maintainers who control the input values. This applies to functions like the sleep benchmark in benchmarks/000.microbenchmarks/010.sleep/cpp/main.cpp.
📚 Learning: 2025-06-19T16:15:01.811Z
Learnt from: mcopik
Repo: spcl/serverless-benchmarks PR: 99
File: benchmarks/000.microbenchmarks/010.sleep/cpp/main.cpp:11-0
Timestamp: 2025-06-19T16:15:01.811Z
Learning: In the serverless-benchmarks project, input validation for benchmark functions is not required since the benchmarks are invoked directly by the maintainers who control the input values. This applies to functions like the sleep benchmark in benchmarks/000.microbenchmarks/010.sleep/cpp/main.cpp.
Applied to files:
- benchmarks/000.microbenchmarks/010.sleep/config.json
- benchmarks/000.microbenchmarks/010.sleep/bun/function.js
🧬 Code graph analysis (10)

benchmarks/100.webapps/110.dynamic-html/bun/function.js (2)
- benchmarks/100.webapps/110.dynamic-html/nodejs/function.js (1): `Mustache` (1-3)
- benchmarks/wrappers/aws/bun/handler.js (1): `path` (2-2)

benchmarks/wrappers/aws/bun/handler.js (2)
- benchmarks/wrappers/aws/bun/runtime.js (2): `event` (16-16), `context` (20-20)
- dockerfiles/local/bun/server.js (3): `begin` (20-20), `ret` (21-21), `end` (24-24)

sebs/local/local.py (2)
- sebs/config.py (2): `docker_repository` (21-22), `version` (67-68)
- sebs/benchmark.py (2): `code_package` (102-103), `language_name` (146-147)

benchmarks/100.webapps/120.uploader/bun/function.js (1)
- benchmarks/200.multimedia/210.thumbnailer/bun/function.js (2): `storage_handler` (5-5), `upload_key` (16-16)

benchmarks/000.microbenchmarks/010.sleep/bun/function.js (1)
- benchmarks/wrappers/aws/bun/runtime.js (1): `event` (16-16)

benchmarks/wrappers/local/bun/storage.js (1)
- scripts/run_experiments.py (1): `minio_storage` (312-421)

sebs/faas/function.py (2)
- sebs/benchmark.py (2): `deserialize` (60-68), `languages` (51-52)
- sebs/faas/config.py (2): `deserialize` (36-37), `deserialize` (64-68)

benchmarks/200.multimedia/210.thumbnailer/bun/function.js (3)
- benchmarks/wrappers/aws/bun/handler.js (1): `path` (2-2)
- benchmarks/100.webapps/120.uploader/bun/function.js (5): `storage_handler` (6-6), `bucket` (18-18), `output_prefix` (19-19), `upload_key` (21-21), `promise` (26-26)
- benchmarks/wrappers/aws/bun/runtime.js (1): `event` (16-16)

sebs/benchmark.py (1)
- sebs/faas/function.py (1): `Language` (263-274)

dockerfiles/local/runner.py (1)
- sebs/benchmark.py (1): `languages` (51-52)
🪛 Biome (2.1.2)
benchmarks/wrappers/aws/bun/storage.js
[error] 30-30: Shouldn't redeclare 'file'. Consider to delete it or rename it.
'file' is defined here:
(lint/suspicious/noRedeclare)
🪛 LanguageTool
docs/modularity.md
[style] ~200-~200: To form a complete sentence, be sure to include a subject.
Context: ...db volume having the wrong permissions. Can be circumvented using `chown -R <your u...
(MISSING_IT_THERE)
🪛 Ruff (0.14.10)
sebs/faas/function.py
274-274: Create your own exception
(TRY002)
274-274: Avoid specifying long messages outside the exception class
(TRY003)
289-289: Create your own exception
(TRY002)
289-289: Avoid specifying long messages outside the exception class
(TRY003)
🪛 Shellcheck (0.11.0)
dockerfiles/local/bun/run_server.sh
[warning] 3-3: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
dockerfiles/bun_installer.sh
[warning] 3-3: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🔇 Additional comments (34)
.gitignore (1)
192-201: LGTM! The changes appropriately extend `.gitignore` to accommodate Bun runtime support and local deployment artifacts. The new patterns for benchmark output files (`out_benchmark.json`, `out_storage.json`, `local_deployment.json`, `experiments.json`) and the `.vscode` editor config are well-organized, follow the existing style, and align with the PR's introduction of local Bun deployments.
benchmarks/100.webapps/110.dynamic-html/config.json (1)
4-8: LGTM! The addition of "bun" to the supported languages is consistent with the PR objectives and aligns with similar changes across other benchmark configurations.
sebs.py (1)
67-67: LGTM! The addition of "bun" to the language choices properly integrates Bun runtime support into the CLI interface, consistent with the framework-wide changes in this PR.
benchmarks/100.webapps/110.dynamic-html/bun/package.json (1)
8-8: No action required — version 3.2.1 is secure and compatible. The specified mustache version (3.2.1) is free from known security vulnerabilities and fully compatible with Bun. Version 4.2.0 is available as an optional upgrade, but the current version is stable and secure with no reported issues.
tools/build_docker_images.py (1)
16-16: LGTM! The addition of "bun" to the language choices is straightforward and consistent with the existing pattern for Python and Node.js support.
benchmarks/200.multimedia/210.thumbnailer/config.json (1)
4-11: LGTM! The addition of "bun" to the languages array enables Bun runtime support for the thumbnailer benchmark. The multiline formatting also improves readability.
benchmarks/000.microbenchmarks/010.sleep/config.json (1)
4-8: LGTM! The addition of "bun" to the languages array is consistent with the PR's goal of enabling Bun runtime support. The multiline formatting aligns with the pattern used in other benchmark configs.
dockerfiles/local/runner.py (1)
7-7: LGTM! The addition of the 'bun' language mapping follows the existing pattern and correctly maps to the 'bun' executable. This enables the local runner to recognize and execute Bun-based benchmarks.
benchmarks/000.microbenchmarks/010.sleep/bun/function.js (1)
1-6: LGTM! Clean implementation of the sleep benchmark. The timer helper and handler logic are correct. The function properly converts seconds to milliseconds and returns a promise that resolves after the specified delay.
dockerfiles/local/bun/timeit.sh (1)
1-11: LGTM! The timing script correctly measures Bun function execution. The script properly:
- Configures time format for user/system/real time measurement
- Uses the `--expose-gc` flag for garbage collection control
- Loads the function module and executes the handler with input from input.json
- Redirects output to the specified file
config/systems.json (2)
152-176: LGTM! Bun configuration for AWS mirrors the established pattern. The AWS Bun configuration correctly includes:
- Base images for both x64 and arm64 architectures
- Deployment files including the additional `runtime.js` (necessary for custom runtime integration)
- The `uuid` package matching the Node.js configuration
68-90: Bun Docker image versions verified as available. Both `oven/bun:1.2` and `oven/bun:1.3` exist on Docker Hub and are appropriate for use in the benchmark configuration.
sebs/local/local.py (2)
127-127: LGTM! Bun packaging configuration added correctly. The Bun configuration correctly mirrors the Node.js setup with the same artifacts (handler.js, package.json, node_modules), which is appropriate since Bun is designed to be compatible with Node.js module systems.
148-152: LGTM! Container naming enhanced with version suffix. The addition of the system version suffix to the container tag improves version tracking and isolation. The format change from 3 to 4 parameters is correctly implemented.
dockerfiles/aws/bun/Dockerfile.function (1)
1-8: LGTM! Dockerfile structure is correct. The Dockerfile correctly:
- Uses parameterized base image for flexibility
- Sets up the working directory and copies necessary files
- Configures the entrypoint to run Bun with runtime.js
The `--bun` flag explicitly uses Bun's runtime instead of Node.js compatibility mode, which is appropriate for a dedicated Bun deployment.
4-11: LGTM! Configuration correctly updated for Bun support. The addition of "bun" to the languages array and the multiline formatting are appropriate for enabling Bun runtime support.
dockerfiles/local/bun/runners.json (1)
1-6: LGTM! Runner configuration is properly structured. The JSON manifest correctly maps Bun runtime components to their respective executables.
benchmarks/100.webapps/110.dynamic-html/bun/function.js (1)
1-7: LGTM! Imports and helper function are correct. The CommonJS imports and random number generator are properly implemented.
sebs/faas/function.py (3)
266-266: LGTM! Bun language enum correctly added. The addition of the BUN language enum follows the established pattern and is consistent with the PR objectives.
274-274: Excellent improvement! Error messages now display the actual invalid input. Changing from `member` to `val` in the error messages fixes a bug where the error would display the loop variable instead of the actual invalid input value.
Also applies to: 289-289
303-303: LGTM! Runtime deserialize correctly includes Bun. The addition of Bun to the runtime language mapping is correct and consistent with the new Language.BUN enum.
Note: The static analysis hints about exception types (TRY002, TRY003) suggest using custom exception classes instead of generic Exception with string messages. This is a broader pattern across the codebase and could be addressed in a future refactoring effort, but is not a blocker for this PR.
dockerfiles/local/bun/Dockerfile.build (1)
1-15: LGTM! The Dockerfile follows best practices by copying `gosu` from a pinned image version and maintains a clean structure for the Bun build environment.
1-15: LGTM! The Dockerfile maintains consistency with the local Bun build configuration and correctly pins the `gosu` version.
156-156: LGTM! Using the `benchmark_archive` variable directly improves code clarity and eliminates redundant path construction.
257-257: LGTM! The simplified assignment directly uses the mapped runtime value, removing unnecessary string concatenation.
140-140: Bun configuration files verified as correct. All files listed in CONFIG_FILES match the actual deployment artifacts:
- `bootstrap` is generated by `bun_installer.sh`
- `bun` executable is downloaded and installed during container build
- `runtime.js` and `handler.js` are part of the AWS wrapper
- `package.json` exists in each benchmark function's bun directory
- `node_modules` is created by `bun install` during the build process

The configuration accurately reflects the deployment structure and no changes are needed.
178-179: The `provided.al2023` runtime for Bun is correctly configured. AWS Lambda's `provided.al2023` is the appropriate custom runtime for deploying Bun applications. Bun officially supports this deployment method via a custom bootstrap. The implementation in the codebase includes architecture support for both arm64 and x64, with proper bootstrap and Lambda Runtime API integration.
dockerfiles/bun_installer.sh (2)
5-14: LGTM! The architecture-specific handling for arm64 and x64 is implemented correctly with appropriate error handling for unsupported architectures.

17-20: LGTM! The Bun binary download and extraction logic is correct and properly cleans up the archive.
sebs/benchmark.py (1)
255-257: LGTM! Bun integration follows existing patterns.

The Bun language support is consistently implemented throughout the benchmark packaging workflow, reusing the Node.js patterns (file globs, package management, build process), which is appropriate since Bun is Node.js-compatible.
Also applies to: 320-320, 411-412, 490-490
benchmarks/wrappers/aws/bun/storage.js (4)
11-13: LGTM! The constructor correctly initializes the S3 client with default configuration, which will use AWS Lambda's execution role credentials.
15-19: LGTM! The unique name generation correctly preserves the directory structure and file extension while adding a UUID segment for uniqueness.
21-27: LGTM! The upload method correctly streams the file to S3 and returns both the unique key and upload promise, allowing callers to track the operation.
34-48: LGTM! Both streaming methods are well-implemented:

- `uploadStream` correctly uses `upload()` rather than `putObject()` for streaming data with unknown length (good explanatory comment).
- `downloadStream` wraps the S3 read stream in a Promise for API consistency across providers (also well-documented).
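The Promise-wrapping idea behind `downloadStream` can be sketched generically (an assumption-level illustration, not the wrapper's exact implementation): collect a Node readable stream and settle once it ends or errors:

```javascript
// Illustrative sketch: turn a Node readable stream into a Promise<Buffer>,
// rejecting on stream errors, so download APIs stay promise-based.
function streamToBuffer(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks)));
  });
}
```

The same pattern works whether the readable comes from S3, a file, or an HTTP response.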
```html
<head>
<title>Randomly generated data.</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="http://netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" rel="stylesheet" media="screen">
```
Use HTTPS for Bootstrap CDN and consider updating Bootstrap version.
The template has two issues:
- Using `http://` instead of `https://` for the Bootstrap CDN creates a mixed content vulnerability and will trigger browser warnings when the page is served over HTTPS.
- Bootstrap 3.0.0 is from 2013 and has known security vulnerabilities.
🔒 Proposed fix
```diff
- <link href="http://netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" rel="stylesheet" media="screen">
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@3.4.1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-HSMxcRTRxnN+Bdg0JdbxYKrThecOKuH5zCYotlSAcp1+c8xmyTe9GYg1l9a69psu" crossorigin="anonymous">
```

Or update to Bootstrap 5 (latest stable):
```diff
- <link href="http://netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" rel="stylesheet" media="screen">
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-9ndCyUaIbzAi2FUVXJi0CjmCapSmO7SnpJef0486qhLnuZ2cdeRhO02iuK6FUUVM" crossorigin="anonymous">
```
🤖 Prompt for AI Agents
In @benchmarks/100.webapps/110.dynamic-html/bun/templates/template.html at line
6, The link tag in template.html uses an insecure HTTP Bootstrap CDN and an
outdated version; update the href on the <link> element to use HTTPS (https://)
and replace the v3.0.0 CDN URL with a current, supported Bootstrap release (for
example a Bootstrap 5 CDN URL) to eliminate mixed-content warnings and address
known vulnerabilities.
```javascript
var file = fs.createWriteStream(download_path);
request(url).pipe(file);
let promise = streamToPromise(file);
```
Add error handling for HTTP request.
The request() call can fail due to network errors, invalid URLs, or HTTP errors, but there's no error handler attached to the request stream. This could lead to unhandled promise rejections.
🛡️ Proposed fix
```diff
  var file = fs.createWriteStream(download_path);
- request(url).pipe(file);
+ const req = request(url);
+ req.on('error', (err) => {
+   file.destroy();
+   throw new Error(`Download failed: ${err.message}`);
+ });
+ req.pipe(file);
  let promise = streamToPromise(file);
```
🤖 Prompt for AI Agents
In @benchmarks/100.webapps/120.uploader/bun/function.js around lines 24 - 26,
The HTTP request stream created by request(url) is missing error handling which
can cause unhandled rejections; attach 'error' listeners to the request stream
and the file stream used by fs.createWriteStream(download_path) and ensure
streamToPromise(file) is rejected on those errors (e.g., add request.on('error',
...) and file.on('error', ...) and propagate/abort the other stream as needed);
also handle non-2xx HTTP responses from the request stream (check 'response' or
statusCode on the request) and reject the promise with a clear error so callers
of streamToPromise(file) get proper failure information.
```javascript
var keyName;
let upload = promise.then(
    async () => {
        [keyName, promise] = storage_handler.upload(bucket, path.join(output_prefix, upload_key), download_path);
        await promise;
    }
);
await upload;
return {bucket: bucket, url: url, key: keyName}
```
Initialize keyName or handle undefined case.
keyName is declared on line 27 but only assigned inside the promise callback on line 30. If an error occurs during the upload, keyName will be undefined when returned on line 35, which could cause issues downstream.
🔧 Proposed fix
Option 1: Initialize with a default value:
```diff
- var keyName;
+ var keyName = null;
  let upload = promise.then(
      async () => {
          [keyName, promise] = storage_handler.upload(bucket, path.join(output_prefix, upload_key), download_path);
          await promise;
      }
  );
  await upload;
+ if (!keyName) {
+     throw new Error('Upload failed: key name not set');
+ }
  return {bucket: bucket, url: url, key: keyName}
```

Option 2: Restructure to avoid the issue:
```diff
- var keyName;
- let upload = promise.then(
-     async () => {
-         [keyName, promise] = storage_handler.upload(bucket, path.join(output_prefix, upload_key), download_path);
-         await promise;
-     }
- );
- await upload;
- return {bucket: bucket, url: url, key: keyName}
+ await promise;
+ const [keyName, uploadPromise] = storage_handler.upload(bucket, path.join(output_prefix, upload_key), download_path);
+ await uploadPromise;
+ return {bucket: bucket, url: url, key: keyName}
```
🤖 Prompt for AI Agents
In @benchmarks/100.webapps/120.uploader/bun/function.js around lines 27 - 35,
keyName is declared but only set inside the async promise callback (via
storage_handler.upload), so if the upload fails you return undefined; initialize
keyName to a safe default (e.g., null or empty string) or restructure to capture
the upload result directly: call const [resultKey, uploadPromise] =
storage_handler.upload(bucket, path.join(output_prefix, upload_key),
download_path); await uploadPromise; then set keyName = resultKey (or wrap in
try/catch to handle errors and set a fallback) before returning {bucket, url,
key: keyName}; ensure references to storage_handler.upload, keyName,
upload/promise are updated accordingly.
```json
"author": "",
"license": "",
"dependencies": {
    "request": "^2.88.0"
```
🧩 Analysis chain
🌐 Web query:
npm request package deprecation status 2020
💡 Result:
Short answer: The npm package request was officially deprecated in February 2020 (deprecation message added on Feb 11, 2020). See the package pages and the maintainers' deprecation discussion (GitHub issue #3142). [1][2][3]
Sources
- request package on npm (deprecation message). [1]
- GitHub issue "Request’s Past, Present and Future" (issue #3142 discussing deprecation/maintenance). [2]
- Coverage/reporting of the deprecation (Feb 2020). [3]
🏁 Script executed:
```shell
# Check the context of this package.json and see how request is used
find benchmarks/100.webapps/120.uploader/bun -type f -name "*.js" -o -name "*.ts" | head -20
```

Repository: spcl/serverless-benchmarks
Length of output: 121
🏁 Script executed:
```shell
# Search for usage of request package in the uploader code
rg "require.*request|import.*request" benchmarks/100.webapps/120.uploader/bun/ 2>/dev/null || echo "No direct request imports found"
```

Repository: spcl/serverless-benchmarks
Length of output: 157
🏁 Script executed:
```shell
# Check the full package.json to understand the context
cat benchmarks/100.webapps/120.uploader/bun/package.json
```

Repository: spcl/serverless-benchmarks
Length of output: 212
🏁 Script executed:
```shell
cat benchmarks/100.webapps/120.uploader/bun/function.js
```

Repository: spcl/serverless-benchmarks
Length of output: 1057
Replace deprecated request package with native fetch API.
The request package was officially deprecated in February 2020 and is no longer maintained, creating security risks from unpatched vulnerabilities. The code uses it simply to download and pipe a file: request(url).pipe(file), which is easily replaceable with Bun's native fetch API.
♻️ Replace with Bun's native fetch
```diff
  const fs = require('fs'),
      path = require('path'),
-     request = require('request'),
      storage = require('./storage');
```

Then update the download:
```diff
  var file = fs.createWriteStream(download_path);
- request(url).pipe(file);
+ const response = await fetch(url);
+ response.body.pipe(file);
```

Remove `"request": "^2.88.0"` from package.json dependencies.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @benchmarks/100.webapps/120.uploader/bun/package.json at line 8, Remove the
deprecated "request" dependency from package.json and replace any usage of
request(url).pipe(file) with Bun's native fetch streaming: perform fetch(url),
check response.ok, then stream the response body to a writable file stream via
response.body.pipeTo(writableStream) (or use Bun.file/Bun.write APIs to create
the writable stream). Update any code that imports or references the "request"
module to use this fetch-based download flow instead.
```javascript
bucket = event.bucket.bucket
input_prefix = event.bucket.input
output_prefix = event.bucket.output
let key = event.object.key
width = event.object.width
height = event.object.height
```
Critical: Missing variable declarations create implicit globals.
Variables bucket, input_prefix, output_prefix, width, and height are assigned without const, let, or var declarations. This creates implicit global variables, which can lead to:
- Unintended side effects and difficult-to-debug issues
- Pollution of the global scope
- Potential conflicts in concurrent execution contexts
🐛 Proposed fix
```diff
- bucket = event.bucket.bucket
- input_prefix = event.bucket.input
- output_prefix = event.bucket.output
+ const bucket = event.bucket.bucket;
+ const input_prefix = event.bucket.input;
+ const output_prefix = event.bucket.output;
  let key = event.object.key
- width = event.object.width
- height = event.object.height
+ const width = event.object.width;
+ const height = event.object.height;
```
🤖 Prompt for AI Agents
In @benchmarks/200.multimedia/210.thumbnailer/bun/function.js around lines 9 -
14, The assignments in the thumbnailer entry (the block that sets bucket =
event.bucket.bucket, input_prefix = event.bucket.input, output_prefix =
event.bucket.output, and width/height from event.object) are missing
declarations and create implicit globals; fix by declaring them with appropriate
keywords (e.g., const bucket = event.bucket.bucket, const input_prefix =
event.bucket.input, const output_prefix = event.bucket.output, and use const or
let for width and height depending on whether they are reassigned) in the same
location (the top of the function handling the event) so they are block-scoped
and do not pollute the global scope.
```shell
#!/bin/bash

cd /mnt/function
```
Add error handling for cd command.
The script should verify that the directory change succeeds before proceeding, as a failed cd could cause subsequent commands to execute in an unexpected directory.
🐛 Proposed fix
```diff
-cd /mnt/function
+cd /mnt/function || exit 1
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 3-3: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🤖 Prompt for AI Agents
In @dockerfiles/bun_installer.sh at line 3, The script currently runs the
command `cd /mnt/function` without checking its result; update the script to
verify the directory change succeeded before continuing by testing the exit
status of the `cd /mnt/function` command and, on failure, print a clear error
message and exit with a non-zero status so subsequent commands do not run in the
wrong directory.
```shell
chmod +x bun
rm bun.zip

rm -r bun.lock
```
Use rm -f to avoid errors when bun.lock doesn't exist.
The command rm -r bun.lock will fail if the file doesn't exist. Using -f makes the operation safer.
🐛 Proposed fix
```diff
-rm -r bun.lock
+rm -rf bun.lock
```
🤖 Prompt for AI Agents
In @dockerfiles/bun_installer.sh at line 22, The script currently runs the
command "rm -r bun.lock" which will error if bun.lock is absent; update that
invocation to use the forced remove flag instead (use -f rather than -r) so
removing bun.lock is safe when the file doesn't exist.
```shell
rm -r bun.lock
# moves to correct directory on AWS if needed
echo -e '#!/bin/bash\nif $LAMBDA_TASK_ROOT; then cd $LAMBDA_TASK_ROOT; fi\n./bun --bun runtime.js' > bootstrap
```
Fix critical shell syntax error in bootstrap script.
The condition if $LAMBDA_TASK_ROOT attempts to execute the value of LAMBDA_TASK_ROOT as a command, which will fail. The correct syntax should check if the variable is non-empty.
For example, if LAMBDA_TASK_ROOT=/var/task, the script tries to run /var/task as a command, resulting in an error like: /var/task: Permission denied or /var/task: Is a directory.
🐛 Proposed fix
```diff
-echo -e '#!/bin/bash\nif $LAMBDA_TASK_ROOT; then cd $LAMBDA_TASK_ROOT; fi\n./bun --bun runtime.js' > bootstrap
+echo -e '#!/bin/bash\nif [ -n "$LAMBDA_TASK_ROOT" ]; then cd "$LAMBDA_TASK_ROOT"; fi\n./bun --bun runtime.js' > bootstrap
```
🤖 Prompt for AI Agents
In @dockerfiles/bun_installer.sh at line 24, The bootstrap script writes a
faulty shell conditional that tries to execute $LAMBDA_TASK_ROOT; update the
echoed script so the if checks the variable correctly (e.g., use [ -n
"$LAMBDA_TASK_ROOT" ] or [ -d "$LAMBDA_TASK_ROOT" ]), and ensure the conditional
uses proper shell syntax with then/fi around cd $LAMBDA_TASK_ROOT; keep the rest
of the bootstrap invocation (./bun --bun runtime.js) unchanged and ensure quotes
are preserved when echoing the script.
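For context, a bootstrap like this hands off to `runtime.js`, which must implement the Lambda custom-runtime HTTP loop. A hedged sketch of that loop follows — the endpoint paths and the `AWS_LAMBDA_RUNTIME_API` environment variable are the documented Lambda Runtime API, while the helper names and loop body are my illustration, not the PR's actual `runtime.js`:

```javascript
// The Lambda Runtime API is served at http://$AWS_LAMBDA_RUNTIME_API under
// this fixed API version prefix.
const API_VERSION = '2018-06-01';

// Build the three Runtime API endpoints for one invocation.
function invocationUrls(apiBase, requestId) {
  return {
    next: `http://${apiBase}/${API_VERSION}/runtime/invocation/next`,
    response: `http://${apiBase}/${API_VERSION}/runtime/invocation/${requestId}/response`,
    error: `http://${apiBase}/${API_VERSION}/runtime/invocation/${requestId}/error`,
  };
}

// Illustrative event loop: long-poll for the next event, run the handler,
// post the result (or an error) back, repeat forever.
async function runLoop(handler) {
  const apiBase = process.env.AWS_LAMBDA_RUNTIME_API;
  for (;;) {
    const next = await fetch(invocationUrls(apiBase, '').next);
    const requestId = next.headers.get('lambda-runtime-aws-request-id');
    const urls = invocationUrls(apiBase, requestId);
    try {
      const result = await handler(await next.json());
      await fetch(urls.response, { method: 'POST', body: JSON.stringify(result) });
    } catch (err) {
      await fetch(urls.error, { method: 'POST', body: JSON.stringify({ errorMessage: String(err) }) });
    }
  }
}
```

Because the whole protocol is plain HTTP, any runtime that can `fetch` — Bun included — can serve `provided.al2023` this way.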
```shell
cd /sebs
bun /sebs/server.js "$@"
```
🛠️ Refactor suggestion | 🟠 Major
Add error handling for directory change.
The cd command should include error handling to prevent executing the server in an unexpected directory if /sebs doesn't exist or is inaccessible.
🛡️ Proposed fix
```diff
-cd /sebs
-bun /sebs/server.js "$@"
+cd /sebs || exit 1
+bun server.js "$@"
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 3-3: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🤖 Prompt for AI Agents
In @dockerfiles/local/bun/run_server.sh around lines 3 - 4, The script
run_server.sh currently runs "cd /sebs" without checking for failure, risking
starting bun /sebs/server.js in the wrong directory; update the script to check
the exit status of the cd command (or use a conditional like "cd /sebs || exit
1") before running bun /sebs/server.js "$@" so the script aborts with a non-zero
exit code if changing to /sebs fails and does not execute the server in an
unexpected location.
```javascript
app.listen(port=process.argv[2], function () {
    console.log(`Server listening on port ${process.argv[2]}.`);
});
```
🛠️ Refactor suggestion | 🟠 Major
Add validation for the port argument.
The server reads the port from process.argv[2] without validation. If the argument is missing or invalid, the server may crash or bind to an unexpected port.
🛡️ Proposed validation
```diff
+const port = parseInt(process.argv[2], 10);
+if (!port || port < 1 || port > 65535) {
+  console.error('Error: Valid port number required as first argument');
+  process.exit(1);
+}
+
-app.listen(port=process.argv[2], function () {
-  console.log(`Server listening on port ${process.argv[2]}.`);
+app.listen(port, function () {
+  console.log(`Server listening on port ${port}.`);
 });
```
🤖 Prompt for AI Agents
In @dockerfiles/local/bun/server.js around lines 44 - 46, The code is using
process.argv[2] directly in app.listen (app.listen(port=process.argv[2], ...))
without validation; parse process.argv[2] into an integer (e.g., const port =
parseInt(process.argv[2], 10)), verify it's a finite number within the valid
port range (1–65535), and if invalid either exit with a clear error via
console.error/process.exit(1) or fall back to a safe default port; then pass the
validated port variable to app.listen and use that same variable in the
console.log.
Actionable comments posted: 0
🧹 Nitpick comments (1)
config/systems.json (1)
177-180: Consider using AWS SDK v3 instead of deprecated v2.
`aws-sdk` version 2.x entered maintenance mode and will reach end-of-support on September 8, 2025. AWS officially recommends migrating to the modular AWS SDK v3 (`@aws-sdk/*`). For a new runtime, starting with v3 avoids future migration effort and ensures access to new service features.

Example replacement:
```json
"packages": { "uuid": "3.4.0", "@aws-sdk/client-s3": "3.x.x" }
```

This requires updating `storage.js` to use the v3 API, which has different import and usage patterns.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
config/systems.json
🔇 Additional comments (2)
config/systems.json (2)
157-182: AWS Bun configuration structure looks good.

The configuration correctly includes:

- `runtime.js` in deployment files, which is necessary for the AWS Lambda custom runtime
- Both x64 and arm64 architecture support
- Appropriate package set for AWS environment
The structure mirrors the existing nodejs pattern appropriately.
68-95: Configuration uses valid package versions.

Verification confirms that both `oven/bun:1.3` and `oven/bun:1.2` exist on Docker Hub, and Express 5.2.1 is available on npm. The Bun runtime configuration is properly configured with supported image versions and dependencies.
The absolute minimum required to add Bun as a runtime to AWS, compatible with both zip-based and container-based deployment. This means only test 110 was copied over to Bun, and only the strictly necessary changes were made in every file to enable Bun.
All the benchmark and handler functions are the same as for NodeJS, while there are sometimes additional files, like the `runtime.js` file required for AWS.

One key difference for the Dockerfile.build is that the `gosu` binary is copied from another Docker image (it's a completely static binary without any need for dynamically linked libs). This on the one hand opens up the possibility to use more bare-bones/lightweight Docker images without package managers, and on the other hand pins the gosu version instead of relying on the `apt update && apt install` combination. That is potentially something that can be done for the other Dockerfile.build files as well.

Further PRs for Bun (GCP/Azure) are based on this one.
Summary by CodeRabbit
New Features
Chores