[Python] Add FastAPI-Granian tests #8541
base: master
Conversation
Add fortune URL to the `granian` test.
Hey, @NateBrady23, I don’t think I have the authority to decide which tests should go. The current number of tests for FastAPI, including the proposed ones, is 13. Should I use my best judgment, or are there any guidelines on which tests to consider most valuable/suitable? Should I consider the last round’s results? I’m not sure how to determine further reduction. Any tips? 😅
@Aeron I would ping the latest maintainers of the tests to see if they have a preference. You can remove the tests from the `benchmark_config` but leave the test implementations in case they want to add them back later.
@Aeron I already discussed in several places why I think benchmarking single frameworks across several servers is not valuable, considering the benchmarks already include those servers (see for reference #8055, #8420 (comment), emmett-framework/granian#62). This is especially true considering the composite scores might advertise misleading information (see #8420 (comment)). For instance, if you check the Python composite scores for round 22, FastAPI is in 2nd place, but its plaintext and JSON numbers come from the PyPy socketify runs.
And we can make the same considerations for Falcon (3rd position, but again, the plaintext and JSON numbers come from the PyPy socketify runs). So, given the current context, and seeing no proposals to review how "composite scores" are evaluated (maybe we can launch a discussion about it, @NateBrady23?), I don't see any valuable information added by expanding framework matrices with different servers. Nevertheless, if the community has a different opinion on this, I suggest switching the threading mode to runtime and using 2 threads, as you're leaving performance on the table with the current configuration.
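For concreteness, a minimal sketch of what that suggestion could look like in the Granian variant's launch command. This is not taken from the PR: the `app:app` module path, port, and worker count are placeholders, and the exact flag names should be verified against the Granian version the test pins.

```dockerfile
# Hedged sketch of a CMD line with the suggested settings applied.
# The application target (app:app), port, and worker count are placeholders.
CMD granian --interface asgi --host 0.0.0.0 --port 8080 \
    --workers $(nproc) --threads 2 --threading-mode runtime \
    app:app
```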
@gi0baro Yeah, I did my homework and read it before, and I mostly agree. But for now, it is what it is; it doesn’t look like the community is eager to change the approach. Weirdly, it seems like it’s not always an identical ∂R across frameworks; I certainly saw a similar argument in one of the discussions. In an ideal world, it should be, yes. Arguably, it’s more important for “regular” users to see a very explicit comparison of how different frameworks behave on top of various servers, neck and neck. Maybe it’s not ideal and precise, a bit flawed, as you pointed out, yet valuable for many folks, even if it smells a little like marketing. It’s a single point of reference, you know, on which anyone interested enough can build a more educated comparison and analysis. So I think it’s worth a shot. For now, at least. At this point, it’s not even certain I’ll be able to contact all the original maintainers of the things I think should go. We’ll see.
Sure, I’ll look into it. Thank you for pointing it out.
FastAPI already has tests for platforms like Uvicorn and Hypercorn, yet there are none with Granian, which is already present in the benchmarks as a standalone entry. Although the proposed tests still use ASGI and not RSGI (which could draw an even more prominent contrast), I believe it is still a valuable option for comparison, and one that can produce a significant enough difference in performance to be worth considering.
For example, running the fortune and query type tests on my machine (M1 Max 64 GB) produces the following:
- `fastapi`
- `fastapi-uvicorn`
- `fastapi-granian`
The tests are 100% the same as for the other flavors; this PR only adds the proper container files and extends the benchmark configuration. The Python version of the base container image is also the same: 3.11.
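To make that scope concrete, here is a minimal sketch of what a container file along these lines could look like. It is an illustration based on the description above (Python 3.11 base image, ASGI interface), not the actual file from this PR; the module path, port, filenames, and worker count are assumptions.

```dockerfile
# Hedged sketch of a Granian-based container file for the FastAPI tests.
# Module path (app:app), port, and file layout are placeholders, not from the PR.
FROM python:3.11

WORKDIR /fastapi
COPY . .

# Install the framework dependencies plus Granian itself.
RUN pip install --no-cache-dir -r requirements.txt granian

EXPOSE 8080

# Serve the same FastAPI application through Granian's ASGI interface,
# with one worker per CPU core.
CMD granian --interface asgi --host 0.0.0.0 --port 8080 \
    --workers $(nproc) app:app
```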
/cc @gi0baro