Transformstream in read-write stream pipe #405
There are two sides to this issue. On the first one you are right: there is no transform step between the multer stream and the GridFS stream, and the only way to have it right now is to implement a storage engine yourself that consumes this one under the hood and implements the _handleFile method, piping the incoming file through the transform streams before calling this module's _handleFile with the results of the operation.
On the other hand, your particular example of encrypting (or compressing, or any other operation) using transform streams is a really bad idea because it does not scale well, especially if you deal with large files. The recommended approach for large systems is to store the file first and encrypt/compress using a background task.
I must admit that I'm torn on this issue because, in the end, if there is a scenario where this mechanism is valid (and I'm sure there are some) then it should be available. I'll think about your use case and answer you in a couple of days.
Cool, thanks.
The performance issue you pointed out is probably important; I have no experience with it yet. It would be bad if it unnecessarily increased the time an upload takes, or it might just eat up CPU, which would make it an architectural problem. Still, in a large-scale system it might be better to have some dedicated workers doing that job.
If desired I could provide a PR...
So, did you have any closing thoughts on the issue?
I'll probably implement it, but not in the way you proposed in the example. I'm struggling with the compatibility of some modules that are moving to ES6 while trying to preserve the backwards compatibility of this module. That is what is delaying the next release.
Excellent, thanks! Yeah, that example was not the best solution around...
Is your feature request related to a problem? Please describe.
Although already discussed and closed in #5, I believe it might be worth another thought: e.g. if you had the requirement to encrypt data but do not have the enterprise version, it would be quite fancy to do that with a transform stream, e.g. as described here: http://codewinds.com/blog/2013-08-20-nodejs-transform-streams.html
Describe the solution you'd like
In fromMulterStream, enhance this line:
pump([readStream, writeStream])
so that transform streams can be spliced into the pipe.
Describe alternatives you've considered
An alternative would be, as described in #5, to re-read the data, transform it, write it again, and then delete the old data.
Additional context
If you think that's worth pursuing, I could provide a PR.