
Using temp files in new api is not the right approach #495

@devconcept

Description


The new v2 API makes some breaking changes to the way the old API worked.
One of the most notable changes is the removal of storage engines and the exposure of a stream to be manipulated directly.

Although this seems like an advance, in my opinion it is actually a step back from the flexibility v1 offers (at least as it is drafted now).

I found some problems with how files are handled, starting with the use of the fs-temp module as an intermediary for handling files.

Here is a small summary of the side effects of this decision.

  • If I want to store a file in a given folder, I now have to write all the required code manually because I have no way to configure it. This was trivial in v1. The extra work required is a lot compared to just writing a string with a path.

  • I now have an extra file in my filesystem. This is bad for several reasons. Before, the file was written directly from the request to the destination; now it is written to a temporary folder and then moved to
    the destination, potentially requiring much more storage space and processing than before. It is also not intuitive, because it happens under the hood without users knowing that they should probably check the default os.tmpdir() folder for free space.

  • The temporary files will be left in the filesystem. Many developers don't know how to deal properly with streams or files; even when they do, they may forget to delete the copy after piping the file. This only adds to the extra work required simply to "store a file".

  • If I don’t want to use the filesystem as my storage, I still need to wait for all the I/O to finish before I can store my files elsewhere (the cloud, a database, etc.). Imagine this with many large files.

In summary, this change consumes a lot of resources just to be able to tell you the file size and the MIME type. Maybe I'm missing something here, but there might be another way to deal with this that does not mean stripping the API of features.

I think that "storing a file" should not be more difficult than creating the stream, handling success and error, and calling pipe.

Maybe a way to tackle this problem is to accept a writable stream per file and pipe to it automatically, or to expose the busboy streams for direct consumption; but writing an extra temporary file to a fixed folder will definitely cause more problems than it solves.

I wish the new API looked more like this:

const upload = multer('/uploads');

or maybe this:

const upload = multer({
    stream: (req, file) => {
        return writableStream();
    }
});

or both; but the point is that I should still be able to store my files easily and efficiently anywhere I want.
