Initial version to make it run on Podman #69
base: master
Conversation
Build succeeded. ✔️ ara-role-api-distributed-sqlite SUCCESS in 14m 13s
Build succeeded. ✔️ ara-role-api-distributed-sqlite SUCCESS in 19m 46s
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 14m 34s
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 14m 38s
```yaml
description: |
  Deploys the ARA API server on Fedora 36 as well as CentOS Stream 8/9
  in a Podman container and tests it using the default sqlite database backend.
run: tests/with_podman.yaml
```
Cool! We should bump that to Fedora 38 like I did recently for ara but I can take care of that in another PR.
Let's split that; first, let's make this work ^^
I like that there is already a job for testing podman and that it is even passing, nice work :)
```yaml
---
- name: Detecting existing PyPI installation
  ansible.builtin.stat:
    path: "{{ ara_api_venv_path }}"
  register: existing_pypi_install

- name: Remove PyPI virtualenv
  ansible.builtin.file:
    path: "{{ ara_api_venv_path }}"
    state: absent
  when: existing_pypi_install['stat']['exists']

- name: Detecting existing sqlite database
  ansible.builtin.stat:
    path: "{{ ara_api_database_name }}"
  register: existing_sqlite_database

- name: Move sqlite database to new location
  ansible.builtin.command: mv {{ ara_api_database_name }} {{ ara_api_root_dir }}/ansible.sqlite
  when:
    - existing_sqlite_database['stat']['exists']
    - ara_api_database_engine == 'django.db.backends.sqlite3'
```
I think I understand the intent here but I'm not sure whether we should do this automatically.
The collection doesn't (yet) do anything special when switching between installation types or database backends so it feels a bit out of place to go out and delete venvs and move files around.
If the concern is that there might already be an existing installation, maybe we could print a helpful debug message about it and fail the playbook such that the user is given the opportunity to clean it up or automate it outside the collection.
We can always reconsider if it turns out to be a big issue or something we really want to support.
I put it behind a flag for now :-) and I added a notice in the script to notify the user.
The reason I put it there is that it's little extra effort to clean up the PyPI installation (which in my case broke because I upgraded to Debian 12, which doesn't take kindly to `pip install` any longer).
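For illustration, a flag-gated cleanup along the lines being discussed could look roughly like this. This is a sketch, not the exact PR code; the flag name `ara_api_cleanup_pypi_install` is hypothetical, while `existing_pypi_install` and `ara_api_venv_path` come from the diff above:

```yaml
# Sketch: only clean up when the user has explicitly opted in via a
# (hypothetical) ara_api_cleanup_pypi_install flag, and notify them first.
- name: Notify about removal of the existing PyPI installation
  ansible.builtin.debug:
    msg: "Removing existing PyPI virtualenv at {{ ara_api_venv_path }}"
  when:
    - existing_pypi_install['stat']['exists']
    - ara_api_cleanup_pypi_install | bool

- name: Remove PyPI virtualenv
  ansible.builtin.file:
    path: "{{ ara_api_venv_path }}"
    state: absent
  when:
    - existing_pypi_install['stat']['exists']
    - ara_api_cleanup_pypi_install | bool
```

With the flag defaulting to false, an existing installation is left untouched unless the user opts in, which matches the reviewer's preference for not deleting things automatically.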
```yaml
- name: Ensure ARA API container
  containers.podman.podman_container:
    name: ara-api
    image: recordsansible/ara-api:{{ ara_api_version }}
```
`ara_api_version` defaults to `master`, right? That tag doesn't exist.
I'm not convinced we need an extra variable for the container image tag but it's not the end of the world.
We could also substitute `master` for `latest`, perhaps.
What do you think?
Oh, I completely missed that, as I have mine configured to use 'latest'. I have updated the defaults; if the tag does not exist, it makes no sense to have it as a default :)
```yaml
    name: ara-api
    image: recordsansible/ara-api:{{ ara_api_version }}
    state: present
    auto_remove: true
```
Should it really be "auto_remove"? If it deletes itself on exit, then the systemd service wouldn't really have the opportunity to stop or restart it.
I put it there to make the `podman generate systemd` command below idempotent with what is created :)
The systemd service knows how to handle it, but it needs an existing container to template from.
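For context, a rough sketch of the pattern being described: the container is created first so that `podman generate systemd` has something to template a unit file from. Task wording and the `chdir` location are assumptions, not the exact PR code:

```yaml
# Sketch (not the exact PR tasks): the container must exist before
# "podman generate systemd" can template a unit file from it.
- name: Ensure ARA API container
  containers.podman.podman_container:
    name: ara-api
    image: recordsansible/ara-api:{{ ara_api_version }}
    state: present
    auto_remove: true

- name: Generate a systemd unit file for the container
  ansible.builtin.command: podman generate systemd --new --files --name ara-api
  args:
    chdir: /etc/systemd/system
```

With `--new`, the generated unit creates a fresh container on start and removes it on stop, which is why the templated container being auto-removed is consistent with how the unit behaves afterwards.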
Hello and thanks for the PR @Thulium-Drake ❤️ I have some questions and comments in line but this is good work. I'm looking forward to merging it when it's ready :)
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 13m 15s
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 15m 34s
So I ran into 2 issues deploying it on a fresh system (I think I forgot to thoroughly clean my dev system the other day :-) and I didn't test RHEL yet):
Build failed. ❌ ara-role-api-distributed-sqlite FAILURE in 12m 22s
Build failed. ❌ ara-role-api-distributed-sqlite FAILURE in 12m 08s
Force-pushed da2d609 to 5fd69b9
Build failed. ❌ ara-role-api-distributed-sqlite FAILURE in 11m 32s
Build failed. ❌ ara-role-api-distributed-sqlite FAILURE in 11m 42s
recheck
@dmsimard I think CI needs some love, all tests seem to fail, even those this PR didn't touch 😅 Do I need to rebase and try again? Or do we need to work on that first (and if so, what needs fixing?)
Build failed. ❌ ara-role-api-distributed-sqlite FAILURE in 11m 37s
I need some time but I will look into it. Sorry about the delay.
Build failed. ❌ ara-role-api-distributed-sqlite FAILURE in 11m 57s
I am still interested in this. I will circle back to the PR once there is a bugfix release out. Thanks for your patience.
Force-pushed 91ae432 to e3ebf47
@dmsimard Rebased! :-)
recheck
… systems, so no further handling is required
…the current config
…out existing pypi installation
…ations on local registries etc.)
Force-pushed f189ded to 81d5f58
Build succeeded. ✔️ ara-role-api-distributed-sqlite SUCCESS in 5m 45s
…rect vars in the play
Build failed. ❌ ara-role-api-distributed-sqlite NODE_FAILURE Node request 200-0007632258 failed in 0s
recheck
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 6m 23s
So there seems to be an issue with the podman test: it uses an older release of the containers.podman collection. However, I think that parameter is really important to have, as leaving it at the default might cause end users to run with outdated images. Is there any way to set that requirement? :-)
This is in the version of Ansible that Zuul runs, yeah?
Yeah, I checked the collections repo: it was released in 1.14.0 (added with containers/ansible-podman-collections@2f5da14)
According to the Zuul docs there is a job parameter that we can set to select the version of Ansible that the job should use: https://zuul-ci.org/docs/zuul/latest/config/job.html#attr-job.ansible-version

I don't know how to tell what versions of Ansible are available, though. We can try bumping it and see if it works.

When we needed to be strict about the version of Ansible that the job ran in the past, we'd use Zuul to install a specific version of Ansible in a virtualenv and then use that (nested) Ansible. This is what we do for the ara source repo. I try to avoid doing that if we can since it adds a layer of complexity and "hides" the tasks from Zuul, since the task essentially becomes an
I checked the release notes for Ansible 9; assuming it's using the latest version, we should be good. Also, Ansible 8 has been deprecated for a while now :-)
I see. We can try bumping that up to 9 and see if that works.
Zuul encountered a syntax error while parsing its configuration:

extra keys not allowed @ data['ansible_version']

The problem appears in the "ara-role-api-podman" job stanza in "ansible-community/ara-collection/.zuul.d/jobs.yaml@master", line 101
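For reference, the Zuul job documentation linked above spells the attribute with a hyphen, not an underscore. A minimal sketch of the stanza, assuming the Zuul in use supports the attribute (job name taken from the error message):

```yaml
# Sketch: the Zuul job attribute is "ansible-version" (hyphenated),
# per https://zuul-ci.org/docs/zuul/latest/config/job.html#attr-job.ansible-version
- job:
    name: ara-role-api-podman
    ansible-version: 9
```

If the scheduler still rejects the hyphenated form, that would suggest the deployed Zuul version predates the attribute rather than a spelling problem.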
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 5m 50s
It's failing here now:
I suppose it should be fully qualified? like docker.io/recordsansible/ara-api or quay.io/recordsansible/ara-api. |
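A fully qualified default along those lines could be sketched as below. This is an illustration, not the PR's actual defaults; `ara_api_image` is a hypothetical variable name, while `ara_api_version` appears in the diff above:

```yaml
# Sketch of defaults (variable names assumed): fully qualify the image so
# podman does not depend on short-name resolution, which fails on hosts
# without unqualified-search-registries configured.
ara_api_image: "docker.io/recordsansible/ara-api"
ara_api_version: "latest"
```

The container task would then reference `{{ ara_api_image }}:{{ ara_api_version }}`, which also keeps the door open for users pointing at a local registry mirror.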
Almost there, maybe; it's now failing here:
Build failed. ✔️ ara-role-api-distributed-sqlite SUCCESS in 6m 04s
```yaml
ara_api_log_dir: "/opt/ara/logs"
ara_api_settings: "{{ ara_api_root_dir }}/settings.yaml"
```

```yaml
- name: Ensure ARA API container
```
There is kind of a gotcha here.
I think it is succeeding in starting the container when running from an unprivileged user (since podman can start rootless no problem) but then it tries to set up the systemd unit for it and fails due to lack of privileges.
If we set `become: true` on the task that starts the container, I would presume that it'd start it as root and not as the unprivileged user. It would work, but wouldn't be the same thing. If we go that route it would be worthy of being mentioned as a caveat, since the pip/source installs support running as unprivileged.
Note that there would also be a failure on the following task (service) since it doesn't have `become: true`.
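If the root-container route were taken, the escalation would need to be applied to both tasks mentioned. A sketch under that assumption (the service task and the unit name `container-ara-api` are illustrative; the latter follows the `container-<name>` convention of `podman generate systemd`, which may not be what the PR uses):

```yaml
# Sketch (assumption): run the container and its systemd unit as root.
# Caveat: unlike the pip/source install, this does not run as an
# unprivileged user.
- name: Ensure ARA API container
  become: true
  containers.podman.podman_container:
    name: ara-api
    image: recordsansible/ara-api:{{ ara_api_version }}
    state: present

- name: Ensure ARA API service is enabled and started
  become: true
  ansible.builtin.service:
    name: container-ara-api
    state: started
    enabled: true
```

The alternative would be a fully rootless setup with a user-level systemd unit (`systemd_scope: user` style), which keeps parity with the unprivileged pip/source installs but adds lingering-session considerations.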
Some progress made, but I am out of time for now; let me know what you think.
Still a work in progress, but it should migrate an existing PyPI installation to podman while keeping the data :-)