App Skeleton #3
Hey @LukeMarlin, what is the language/tool you're choosing for the app and API?
Python with FastAPI, as discussed in other issues. When it comes to the framework itself, I'm more experienced with Flask, but FastAPI has a lot of nice things (like type checking, automatic "console" to explore the API) so I wanted to use it. I don't think we're going to have very specific development needs anyway, so I'm pretty sure any (light) framework will do just fine. When it comes to project/packaging management, I'm going to use Poetry, and for testing management I'd go with Tox. These are tools that I'm using quite a lot at work and they've proved very nice so far. As said before, if anyone has strong feelings against a tool, let's discuss this in this issue!
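For reference, a minimal sketch of what such a FastAPI skeleton might look like (the `app/main.py` layout and the `/accounts` route are placeholder assumptions, not the actual draft):

```python
# app/main.py -- illustrative FastAPI skeleton (layout and routes are placeholders)
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="docker-mailserver admin API")


class Account(BaseModel):
    email: str


@app.get("/health")
def health() -> dict:
    """Trivial liveness endpoint."""
    return {"status": "ok"}


@app.get("/accounts", response_model=List[Account])
def list_accounts() -> List[Account]:
    """Placeholder route; a real implementation would read the account list."""
    return []
```

Running `uvicorn app.main:app` then serves interactive docs at `/docs`, which is the automatic "console" mentioned above.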
Two of the most active developers either don't like Python or don't have much experience with it (myself making it three). I'm curious how active @DerOetzi and @simonwiles will ultimately be in helping out (please chime in). What would the objections be to something like golang? Lastly, I will say that FastAPI looks sweet and I'd be willing to learn more Python to get a feel for it. I just worry about maintainability for the folks who are consistently active in the repos.
I have more experience with Flask myself, but I'm trying to switch to FastAPI on another project. For this I have the time to give some discussion input and review PRs, but I don't have time to develop the service by myself.
@NorseGaud Valid points. I mentioned people from the thread indeed, as it is a sidecar, so I hope your team won't be bothered too much with it. I don't think golang is unfit to do this, plenty of libs exist already for APIs, but what I like about python for such simple jobs is how understandable the code will be to anyone, even non-python devs. It also means that when it comes to small bugfixes, pretty much anyone could do them. Obviously the choice is up to you guys; I however don't intend to dive more into golang than I have (which isn't much), as I'm already trying to learn Rust on the side! If you wish, you can even decide once you've seen the skeleton + first route: if it looks too complex to maintain in case external people bail out, feel free to choose another way, it would be understandable ;)
For sure, please proceed. As long as it works, I'm ok with it :)
I'm very interested in this. Did someone already start building? And is there anyone that is taking the lead in this project? I saw that there was still no decision made between Flask and FastAPI. I would be interested in contributing to a FastAPI application.
@thehunt100, since there's no voice clearly against, I started with FastAPI. Going to push the draft skeleton soon.
OK, great, is there anything you need help with?
Opened #4, which shows how I'd organize the app, feel free to comment about python stuff there! My current idea, that I'll try tomorrow, would be to build this Dockerfile on top of docker-mailserver's so that all scripts are loaded and necessary packages installed. Then provide a docker-compose that gives access to
I think this is a great idea. What we can do is PR in docker-mailserver to split the dockerfile into multiple layers so that you only build from what you need to execute the various inner-scripts.
@LukeMarlin I struggled with accessing the config files and scripts in my current solution, and I came up with some unconventional solutions. In my situation, it was important to separate the admin part from the server part, since I don't maintain the docker mailserver project and didn't want update conflicts. I also didn't want to let the admin run with mailserver editing permissions and be directly accessible from the internet in case of a security problem.

So the solution I came up with was to create a /config/run directory inside the docker volume. The admin now creates the commands to execute on the mailserver and saves them as a file in the /config/run dir, so a cronjob or inotifywait-powered script with the right permissions can execute them.

This setup has a few added benefits. It is straightforward to create an audit log to see when and by whom commands were executed. The admin can now run in its container with a low permission level, since it only needs access to /config/run, which helps security and maintenance. Also, it is now easy to test the admin code, since you only need to check the outputted commands. It allows for extra validation on the server side, so the script that executes the commands can catch bugs or security problems.

I realize that this strongly decoupled setup also has some downsides. For example, there is no direct feedback to give to the user. I do think this is possible when you run the command executor at a high frequency, or on the command creation event with something like inotifywait. Also, to get the server's current state, it needs to be provided in the run dir, which is possible by copying the config files after they have changed. For my use case, this was not a big problem.
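If it helps to picture it, here is a rough sketch of the write side of that idea (the `/config/run` path and the file naming are assumptions for illustration only):

```python
# admin container: queue a command instead of touching the mailserver directly
import time
from pathlib import Path

RUN_DIR = Path("/config/run")  # shared docker volume (assumed path)


def queue_command(user: str, *command: str) -> Path:
    """Write one command per file; the timestamp + user in the file name
    doubles as a minimal audit trail."""
    RUN_DIR.mkdir(parents=True, exist_ok=True)
    path = RUN_DIR / f"{int(time.time())}_{user}.cmd"
    path.write_text(" ".join(command) + "\n")
    return path


# e.g. queue_command("admin", "email", "add", "user@example.com")
```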
Still, the script to change passwords relies on
So, you actually created
I save the commands as normal text files. Then I let the execute script parse them and validate the commands. Only predefined (whitelisted) scripts and commands will be allowed. You can run the execute script inside the container. In that case, you could copy the script into the container, but you could also do it on the host system like the setup.sh is doing now. I realize that this is an unconventional setup. If you control the whole project, a tighter coupling might be the preferred way to go.
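And, under the same assumptions, a sketch of the execute side with a whitelist check (the subcommand names and the path of the in-container setup script are made up here):

```python
# mailserver container: a cronjob or inotifywait hook would call something like this
import subprocess
from pathlib import Path

RUN_DIR = Path("/config/run")
SETUP = "/usr/local/bin/setup.sh"  # assumed location of the setup script
ALLOWED = {"email add", "email del", "alias add", "alias del"}  # assumed whitelist


def execute_pending() -> None:
    for cmd_file in sorted(RUN_DIR.glob("*.cmd")):
        parts = cmd_file.read_text().split()
        # only whitelisted subcommands get through
        if " ".join(parts[:2]) in ALLOWED:
            subprocess.run([SETUP, *parts], check=False)
        # move the file to an archive dir instead if you want to keep the audit log
        cmd_file.unlink()
```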
@LukeMarlin, I had envisioned this as running alongside a fully configured docker-mailserver container. That's why I suggested splitting the layers up a bit more in the mailserver repo, so the docker tag layers are already on the host and can be re-used for the admin container without a ton of extra disk space. How about the API runs on the same host as the mailserver and the UI can be placed anywhere the administrator wants?
That gets my vote!
If you mean running in the same container, I think that this might be the most efficient option. That way it's easy to use the scripts, and the API could still be optional (i.e.: docker-mailserver:core & docker-mailserver:apified). However, from what I recall from the original thread, this wasn't popular among maintainers, right? If you mean on the same host in another container, sure, having more layers will help when it comes to disk usage. But the API will still probably require much more RAM than it should. On the other hand, would it be a viable solution for exposing other scripts through the API in the future? For account-related stuff it's just about a file that is easily shared, but I don't know all the possibilities of setup.sh; maybe some functions need to run in the target container?
Options we have available to us (not exhaustive):
Am I missing anything?
Hey @LukeMarlin, I just updated the list of options available. Let me know if anything is missing.
Yes, as hinted above, we could make this image derive from core; not meant as a sidecar, but as a replacement. Now, my opinion on the list: 1.i. yeah I'm not so sure about this, sounds scary and brittle (might be wrong, not a sysadmin or docker guru)
Roger that, thanks for the reply to the points! I personally much prefer running this as a separate container, but we can always hash that out later once it's functional. I don't have any immediate problems with just creating and using an "apiified" tag to run. I worry more about scaling the API/UI for a production setup. Updated:
Side note -- The python version in the container is:
Are you planning on adding a requirements.txt for us to pip install with?
I'm using poetry for requirements. The version of python used in the draft is 3.9, but we should be able to use whatever python3 version is in the docker image (it's probably 3.7+, try python3 --version)
Just briefly chiming in as I saw this referenced from an issue I was removing a stale tag from today. I had a response typed out weeks or so ago on the original discussion thread that I never got around to finishing.

I shared mostly the same opinion as other maintainers that such a feature should be a separate project/container, but I wasn't against the main project's docker builds including a minimal API binary (eg rust compiled via CI on a separate repo). That would allow anyone to build a separate container, such as on alpine, and add in their own public API or admin UI where any auth is implemented and TLS out of the server is taken care of. The main docker image would just be providing a minimal API with the same functionality the bash scripts provide, but imo more reasonable to interact with. I suppose layering the API on top of the main image as a base works too.

I recall a discussion about manipulating files directly instead of using the bash script functionality, and a concern about changes such as passwords (which the original discussion was focused on). The API discussion was suggesting pre-hashing passwords, which IIRC would not work well if we changed from SHA512-crypt to a different hash at some point (new releases of distributions are shifting to yescrypt I think). It seemed better to have the internal API service handle any state changes, which would stay in sync with the bash scripts if it called those.

A separate container proxies the API and can handle any rate limiting, TLS, auth etc., since preferences may differ: eg for auth some may want OAuth, while others may be happy with mTLS, region locking, different ways of handling logs/metrics etc. All of that is a separate concern from the minimal API required for
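On the pre-hashing point, a small illustration of why keeping the hashing server-side is nicer: with something like passlib the scheme becomes a single server-side setting that can be swapped later without touching any client. Purely illustrative; the actual scripts use their own tooling for this.

```python
from passlib.context import CryptContext

# the hash scheme lives in one place on the server; clients send plaintext over
# the proxied/TLS connection and never need to know which format is in use
pwd_context = CryptContext(schemes=["sha512_crypt"])  # swappable later if needed


def hash_password(plain: str) -> str:
    return pwd_context.hash(plain)
```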
Update: I'm currently playing with a Dockerfile that inherits from docker-mailserver's. Anyway, I'm not very often available at the moment, it should be better mid-August; however I hope to have a build before that :)
Might I suggest Caddy? Although personally I still advocate for a minimal internal API service that we can ship on the official image (if the dependencies and size required are minimal, like Rust would enable), with a separate image for a public API and anything like nginx/caddy. Caddy has a very simple config and can handle features you may want, such as automatic TLS provisioning with LetsEncrypt, one-liner reverse proxying of a service, smart file-type defaults for gzip, easy mTLS, etc.
I get the feeling that we'll have to open a large discussion about that once we prove this works building on top. It seems like a lot of the team is split on this right now (maybe just because they haven't seen it yet). 🤞🏼
Will check, I said nginx but in reality I wanted to first check if apache was present in the image to avoid installing stuff. In any case swapping the proxy could be done at any time anyway!
If you'd like to give Caddy a go, let me know if you'd like any help with its config. When using it within Docker, I believe you'd want to listen to

Since Caddy handles the TLS provisioning, it'd need to have access to perform an HTTP port 80 challenge or a wildcard DNS challenge (requires custom caddy builds with DNS plugins IIRC). It is possible to use an existing TLS cert too, but it likewise needs to be assigned the SAN that the API responds to. I imagine you might run into similar concerns with nginx or apache as well, but I'm still adamant about Caddy being nicer to work with config-wise.

Again, if we had a separate internal API, most of those concerns could be delegated to a sidecar container, which proxies the internal API and perhaps provides a frontend web client for admin or whatever else you like. I assume some users would prefer to have nginx-proxy or traefik handle the frontend + API domains and TLS certs, which can be another use case to keep in mind. Or perhaps I've misunderstood the approach being taken?
So far my intention was to have only one API. It should be a very simple one since it will mostly call |
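To make that concrete, a hedged sketch of what such a route could look like if it just shells out to the existing tooling (the script path, subcommand and route shape are all assumptions here):

```python
import subprocess

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
SETUP = "/usr/local/bin/setup.sh"  # assumed location of the in-container script


class NewAccount(BaseModel):
    email: str
    password: str


@app.post("/accounts")
def add_account(account: NewAccount):
    """Thin wrapper: delegate the actual work to the existing shell script."""
    result = subprocess.run(
        [SETUP, "email", "add", account.email, account.password],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise HTTPException(status_code=500, detail=result.stderr.strip())
    return {"status": "created"}
```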
I recall a discussion about the API being reachable from a frontend web client, presumably a REST API. There was talk of using an API key in the request header for auth, rate limiting, and handling HTTPS. None of these things are required for the main API and are better delegated to a separate service, imo, that proxies the API to the public web if desired. Is that no longer the case? How is the API being exposed or interacted with? Is the frontend web admin separate from the API project?
Maybe I don't understand what your idea of an internal API is? No https, no security, no rate limiting, so I suppose exposed only locally? What's its purpose?
As far as I'm concerned, yes. It could still be an option in the forked dockerfile, but for sure I don't think it should be embedded and mandatory alongside the API
An API where none of the other features are relevant to its functionality? It's quite common for services with Docker to only publish port 80 and defer HTTPS to a reverse proxy where a lot of those concerns are handled, especially since the requirements and stack can vary per environment.
Ok great 👍
I'm happy to just wait and see once it's ready; my only concern was about how flexible the approach would be for different setups. We seem to be on the same page, I was just considering the API and security as separate boundaries (it's important to have it, but ideally it can be delegated to existing infrastructure that focuses on that).
Current draft API is intended to be used directly, regardless of client. Could be a small cli, could be curl, could be a web panel or, as suggested by someone else, a plugin in some webmail.
Didn't look into that. Ideally this should be configurable; for the PoC it could be a chosen subdomain
Didn't think much about it yet either; it seems that Caddy can take care of HTTPS, which is nice. Other than that, I'd expect that a couple of env values (token, domain) and a different docker image would suffice. Might be wrong though, I'll know soon enough when I reach that point. Technically I'm somewhat ready to test on my own setup. As I said above, I'm busy and will hopefully be able to provide more after mid-August!
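For the token env value mentioned above, one common FastAPI pattern looks like this (sketched under the assumption of a single shared secret in an `API_TOKEN` env variable and an `X-API-Token` header; not necessarily how the draft will do it):

```python
import os
import secrets

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

api_key_header = APIKeyHeader(name="X-API-Token")  # header name is an assumption


def require_token(token: str = Depends(api_key_header)) -> None:
    expected = os.environ.get("API_TOKEN", "")
    # constant-time comparison to avoid trivial timing leaks
    if not expected or not secrets.compare_digest(token, expected):
        raise HTTPException(status_code=401, detail="invalid or missing token")


# every route registered on this app then requires the header
app = FastAPI(dependencies=[Depends(require_token)])
```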
Could someone summarize - or point at - the decision that's been made so far for building out the container that will run this? I would like to add to the discussion the idea of setting the API up to run by default on HTTPS.
We could simplify and ignore 1/2 by just providing instructions on how to pull down and configure the linuxserver.io swag image https://docs.linuxserver.io/general/swag - which will get you an nginx with easy Let's Encrypt integration. For more details on 3 - I suggest you look at the latest OpenWrt, which defaults to https but uses a self-signed certificate - they even have details on 'trusting' that cert: https://openwrt.org/docs/guide-user/luci/getting_rid_of_luci_https_certificate_warnings

As we are passing tokens / passwords around in the API, we should make it secure by default.
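For the secure-by-default idea, uvicorn can serve TLS directly, so the self-signed option wouldn't even need a proxy in front; a sketch, assuming the cert/key are generated at container start (paths are made up):

```python
# run_https.py -- serve the API over TLS straight from uvicorn (sketch)
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "app.main:app",
        host="0.0.0.0",
        port=8443,
        ssl_keyfile="/certs/selfsigned.key",   # assumed path, e.g. generated with openssl
        ssl_certfile="/certs/selfsigned.crt",  # assumed path
    )
```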
Option 2 is the current one: https://caddyserver.com/ |
I personally won't be able to use it without additional effort - but I'm up for that. I guess it's just a matter of waiting for some code to get shared before I can dive in. |
Related: #1
Goal: Generate the app skeleton which will eventually run the API