reverse proxy: buffer requests for fastcgi by default #6759

Merged · 5 commits · Jan 2, 2025
20 changes: 17 additions & 3 deletions modules/caddyhttp/reverseproxy/reverseproxy.go
@@ -243,6 +243,19 @@ func (h *Handler) Provision(ctx caddy.Context) error {
return fmt.Errorf("loading transport: %v", err)
}
h.Transport = mod.(http.RoundTripper)
// enable request buffering for fastcgi if not configured
// This is because most fastcgi servers are php-fpm, which requires the content length to be set in order to read the
// body. The Go standard library has a fastcgi implementation that doesn't need this value to process the body, but we
// can safely assume that's not in use here.
// HTTP/3 requests have a negative content length for GET and HEAD requests if that header is not sent.
// See: https://github.com/caddyserver/caddy/issues/6678#issuecomment-2472224182
// Even if CONTENT_LENGTH is invalid, php-fpm appears to handle it just fine as long as the body is empty (no Stdin
// records sent), but php-fpm will hang if there is any data in the body: https://github.com/caddyserver/caddy/issues/5420#issuecomment-2415943516

// TODO: better default buffering for fastcgi requests without a content length; in theory a value of 1 should be enough, but make it bigger anyway
if module, ok := h.Transport.(caddy.Module); ok && module.CaddyModule().ID.Name() == "fastcgi" && h.RequestBuffers == 0 {
Member
I don't love that we implement this hack like this cause it means transport implementations don't have control over their buffering setup. Can we make an optional interface that transports can implement to override the default request buffer size? Default to zero, but implement that func in fastcgi transport for this. Would allow any transport plugin to fix this as well if necessary. (But... if we think this is just a temporary hack that will be reworked later anyway once we have full buffering implemented then this is fine?)

Member
I agree this shouldn't be a long-term solution, and I think I like the implicit interface idea alright, so let's revisit that in another issue. But yeah, this should do for now.
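As a rough sketch of the optional-interface idea discussed in this thread (the interface and method names below are hypothetical, not part of this PR or of Caddy's API), a transport could advertise its preferred default buffer size like this:

```go
// requestBufferSizer is a hypothetical optional interface. A transport that
// implements it tells the reverse proxy which request buffer size to use when
// the user has not configured one; returning 0 means no request buffering.
type requestBufferSizer interface {
	DefaultRequestBufferSize() int64
}
```

Provision could then swap the module-ID check for a type assertion along the lines of `if sizer, ok := h.Transport.(requestBufferSizer); ok && h.RequestBuffers == 0 { h.RequestBuffers = sizer.DefaultRequestBufferSize() }`, with the fastcgi transport returning 4096.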

h.RequestBuffers = 4096
}
}
if h.LoadBalancing != nil && h.LoadBalancing.SelectionPolicyRaw != nil {
mod, err := ctx.LoadModule(h.LoadBalancing, "SelectionPolicyRaw")
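To tie the comment in this hunk back to the buffering change: once the request body is buffered, the handler knows how many bytes it actually holds, which gives the fastcgi transport a non-negative value to send as CONTENT_LENGTH. A rough sketch of that effect, assuming the handler fields shown above (illustrative only, not the PR's exact call site):

```go
// Sketch only: if the whole body fits within the configured limit, the count
// returned by bufferedBody is its true length, replacing the -1 reported for
// HTTP/3 GET/HEAD requests that omit the Content-Length header.
if h.RequestBuffers != 0 && r.Body != nil {
	r.Body, r.ContentLength = h.bufferedBody(r.Body, h.RequestBuffers)
}
```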
@@ -1216,13 +1229,14 @@ func (h Handler) bufferedBody(originalBody io.ReadCloser, limit int64) (io.ReadCloser, int64) {
buf := bufPool.Get().(*bytes.Buffer)
buf.Reset()
if limit > 0 {
n, err := io.CopyN(buf, originalBody, limit)
if (err != nil && err != io.EOF) || n == limit {
var err error
written, err = io.CopyN(buf, originalBody, limit)
if (err != nil && err != io.EOF) || written == limit {
return bodyReadCloser{
Reader: io.MultiReader(buf, originalBody),
buf: buf,
body: originalBody,
}, n
}, written
}
} else {
written, _ = io.Copy(buf, originalBody)
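For readers less familiar with the pattern in bufferedBody, below is a small standalone sketch (plain Go, not Caddy code) of how io.CopyN plus io.MultiReader buffers the first part of a stream while keeping the remainder readable; the rename from n to written in this hunk simply makes that byte count the value the function reports in both branches.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

func main() {
	// Stand-in for an incoming request body.
	body := io.NopCloser(strings.NewReader("hello world"))
	limit := int64(5)

	buf := new(bytes.Buffer)
	written, err := io.CopyN(buf, body, limit)
	if err != nil && err != io.EOF {
		panic(err)
	}

	// The first `written` bytes are buffered; chaining buf with the unread
	// remainder reproduces the full stream for the upstream request.
	rest, _ := io.ReadAll(io.MultiReader(buf, body))
	fmt.Println(written, string(rest)) // prints: 5 hello world
}
```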