/rss20.xml">

Fedora People

Introducing Fedora Project Leader Jef Spaleta

Posted by Fedora Magazine on 2025-04-02 14:00:00 UTC

Hello everyone! Current Fedora Project Leader Matthew Miller here, with some exciting news!

A little while ago, I announced that it’s time for a change of hats. I’m going to be moving on to new things (still close to Fedora, of course). Today, I’m happy to announce that we’ve selected my successor: long-time Fedora friend Jef Spaleta.

Some of you may remember Jef’s passionate voice in the early Fedora community. He got involved all the way back in the days of fedora.us, before Red Hat got involved. Jef served on the Fedora Board from July 2007 through the end of 2008. This was the critical time after Fedora Extras and Fedora Core merged into one Fedora Linux where, with the launch of the “Features” process, Fedora became a truly community-led project.

Of course, things have changed a little around here since then. The Council replaced the Board, the Features process has changed (to “Changes“, of course), and … a few other things. Jef has been busy with various day jobs, but has always kept up with Fedora. I’m glad we’re now able to let him give his full attention to the next years of Fedora success.

Jef starts full-time at Red Hat in May. Then, after a few weeks for orientation, I’ll officially pass the torch at Flock in the beginning of June. Please join me in welcoming him back into the thick of things in Fedora-land in the Fedora Discussion thread for this post.

Speaking of Flock (our annual contributor conference)… we’re getting the final schedule lined up! We have an excellent slate of talks and speakers. Perhaps even more importantly, we have some of the best Fedora swag ever made. If you can, join us from June 5–8. Find more information, including registration links, on the Flock website. Prague is a great city, and particularly lovely in June, so if you’ve been looking for an excuse to visit, this is it!

Oh, and one more thing… if you’re really into curling, Jef will be very happy to talk to you about it!

Enhancing Your Python Workflow with UV on Fedora

Posted by Fedora Magazine on 2025-04-02 08:00:00 UTC

This article is a tutorial on using uv to enhance your Python workflow.

If you work with Python, you have most likely used one or more of the following tools:

  • Pip to install packages, or pipx to install them in virtual environments.
  • Anaconda to install packages and custom Python versions, and to manage dependencies.
  • Poetry (and pipx) to manage your Python project and packaging.

Why do you need another tool to manage your Python packaging or install your favorite Python tools? For me, using uv was a decision based on the following features:

  1. Simplicity: uv can handle all the tasks for packaging or installing tools with a very easy-to-use CLI.
  2. Improved dependency management: When there are conflicts, the tool does a great job explaining what went wrong.
  3. Speed: If you ever used Anaconda to install multiple dependencies like PyTorch, Ansible, Pandas, etc. you will appreciate how fast uv can do this.
  4. Easy to install: No third-party dependencies to install, comes with batteries included (this is demonstrated in the next section).
  5. Documentation: Yes, the online documentation is easy to follow and clear. No need to have a master’s degree in the occult to learn how to use the tool.

Let’s be clear from the beginning: there is no one-size-fits-all tool that fixes every issue with Python workflows. Here, I will try to show you why it may make sense for you to try uv and switch.

You will need a few things to follow this tutorial:

  • A Linux installation: I use Fedora but any other distribution will work pretty much the same.
  • An Internet connection, to download uv from their website.
  • Be familiar with pip and virtual environments: This is optional but it helps if you have installed a Python package before.
  • Python programming experience: We will not code much here, but knowing about Python modules and how to package a project using pyproject.toml with frameworks like setuptools will make it easier to follow.
  • Optionally, elevated privileges (sudo), if you want to install binaries system-wide (like RPMs).

Let’s start by installing uv, if you haven’t done so already.

Installing UV

If you have a Linux installation you can install uv like this:

# The installer has options and an unattended installation mode, won't cover that here
curl -LsSf https://astral.sh/uv/install.sh | sh

Using an RPM? Fedora has provided uv packages since version 40, so you can do something like this:

# Fedora RPM is slightly behind the latest version but it does the job
sudo dnf install -y uv

Or make yourself an RPM using the statically compiled binaries from Astral and a little help from Podman and fpm:

[josevnz@dmaf5 docs]$ podman run --mount type=bind,src=$HOME/tmp,target=/mnt/result --rm --privileged --interactive --tty fedora:37 bash
[root@a9e9dc561788 /]# gem install --user-install fpm
...
[root@a9e9dc561788 /]# curl --location --fail --remote-name https://github.com/astral-sh/uv/releases/download/0.6.9/uv-x86_64-unknown-linux-gnu.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 15.8M  100 15.8M    0     0  8871k      0  0:00:01  0:00:01 --:--:-- 11.1M
[root@a9e9dc561788 /]# fpm -t rpm -s tar --name uv --rpm-autoreq --rpm-os linux --rpm-summary 'An extremely fast Python package and project manager, written in Rust.' --license 'Apache 2.0' --version v0.6.9 --depends bash --maintainer 'Jose Vicente Nunez <kodegeek.com@protonmail.com>' --url https://github.com/astral-sh/uv  uv-x86_64-unknown-linux-gnu.tar.gz
Created package {:path=>"uv-v0.6.9-1.x86_64.rpm"}
mv uv-v0.6.9-1.x86_64.rpm /mnt/result/
# exit the container
exit

You can then install it on /usr/local, using --prefix:

sudo -i
[root@a9e9dc561788 /]# rpm --force --prefix /usr/local -ihv /mnt/result/uv-v0.6.9-1.x86_64.rpm 
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:uv-v0.6.9-1                      ################################# [100%]
[root@a9e9dc561788 /]# rpm -qil uv-v0.6.9-1
Name        : uv
Version     : v0.6.9
Release     : 1
Architecture: x86_64
Install Date: Sat Mar 22 23:32:49 2025
Group       : default
Size        : 40524181
License     : Apache 2.0
Signature   : (none)
Source RPM  : uv-v0.6.9-1.src.rpm
Build Date  : Sat Mar 22 23:28:48 2025
Build Host  : a9e9dc561788
Relocations : / 
Packager    : Jose Vicente Nunez <kodegeek.com@protonmail.com>
Vendor      : none
URL         : https://github.com/astral-sh/uv
Summary     : An extremely fast Python package and project manager, written in Rust.
Description :
no description given
/usr/local/usr/lib/.build-id
/usr/local/usr/lib/.build-id/a1
/usr/local/usr/lib/.build-id/a1/8ee308344b9bd07a1e3bb79a26cbb47ca1b8e0
/usr/local/usr/lib/.build-id/e9
/usr/local/usr/lib/.build-id/e9/4f273a318a0946893ee81326603b746f4ffee1
/usr/local/uv-x86_64-unknown-linux-gnu/uv
/usr/local/uv-x86_64-unknown-linux-gnu/uvx

Again, you have several choices.

Now it is time to move to the next section and see what uv can do to make Python workflows faster.

Using UV to run everyday tools like Ansible, Glances, Autopep8

One of the best things about uv is that you can download and install tools on your account with less typing.

One of my favorite monitoring tools, glances, can be installed with pip on the user account:

pip install --user glances
glances

But that will pollute my Python user installation with the glances dependencies. So the next best thing is to isolate it in a virtual environment:

python -m venv ~/venv/glances
. ~/venv/glances/bin/activate
pip install glances
glances

You can see now where this is going. Instead, I could do the following with uv:

uv tool run glances

That is a single line to run and install glances. This creates a temporary environment which can be discarded once we’re done with the tool.

Let me show you the equivalent command; it is called uvx:

uvx --from glances glances

If the command and the distribution name match, then we can skip saying explicitly where it comes from with --from:

uvx glances

Less typing: uv created a virtual environment for me and downloaded glances there. Now say that I want to use a different Python, version 3.12, to run it:

uvx --from glances --python 3.12 glances

If you call this command again, uvx will re-use the virtual environment it created, using the Python interpreter of your choice.

You just saw how uv allows you to install custom Python interpreters. This topic is covered in a bit more detail in the following section.

Is it a good idea to install custom Python interpreters?

Letting developers and DevOps engineers install custom Python interpreters can be a time-saver: no elevated privileges are required, and the hassle of making an RPM to distribute a new Python is gone.

Consider, now, that you wish to use Python 3.13:

[josevnz@dmaf5 ~]$ uv python install 3.13
Installed Python 3.13.1 in 3.21s
 + cpython-3.13.1-linux-x86_64-gnu

Where was it installed? Let’s search for it and run it:

# It is not the system python3
[josevnz@dmaf5 ~]$ which python3
/usr/bin/python3

# And not in the default PATH
[josevnz@dmaf5 ~]$ which python3.13
/usr/bin/which: no python3.13 in (/home/josevnz/.cargo/bin:/home/josevnz/.local/bin:/home/josevnz/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/josevnz/.local/share/JetBrains/Toolbox/scripts)

# Let's find it (Pun intended)
[josevnz@dmaf5 ~]$ find ~/.local -name python3.13
/home/josevnz/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/bin/python3.13
/home/josevnz/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/include/python3.13
/home/josevnz/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/lib/python3.13

# Ah it is inside /home/josevnz/.local/share/uv/python, Let's run it:
[josevnz@dmaf5 ~]$ /home/josevnz/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/bin/python3.13
Python 3.13.1 (main, Jan 14 2025, 22:47:38) [Clang 19.1.6 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Interesting: a custom location that is not in the PATH, which allows you to mix and match Python versions.
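
If you ever lose track of which interpreters uv manages, you can ask it directly. A quick sketch (the exact output will vary with your installation):

# List the Python interpreters uv knows about (both uv-managed and system-wide)
uv python list
# Print the path of the interpreter uv would pick for a given version
uv python find 3.13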

Let’s see if uv can re-use installations. Imagine now that I want to install the tool autopep8 (used to correct style issues in Python code) using Python 3.13:

[josevnz@dmaf5 ~]$ uv tool install autopep8 --python 3.13.1
Resolved 2 packages in 158ms
Prepared 2 packages in 72ms
Installed 2 packages in 8ms
 + autopep8==2.3.2
 + pycodestyle==2.12.1
Installed 1 executable: autopep8

Did the new autopep8 installation re-use the Python 3.13 we installed before?

[josevnz@dmaf5 ~]$ which autopep8
~/.local/bin/autopep8
[josevnz@dmaf5 ~]$ head -n 1 ~/.local/bin/autopep8
#!/home/josevnz/.local/share/uv/tools/autopep8/bin/python
[josevnz@dmaf5 ~]$ ls -l /home/josevnz/.local/share/uv/tools/autopep8/bin/python
lrwxrwxrwx. 1 josevnz josevnz 83 Mar 22 16:50 /home/josevnz/.local/share/uv/tools/autopep8/bin/python -> /home/josevnz/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/bin/python3.13

Yes it did. Very good: we are not wasting space with duplicate Python interpreter installations.

But what if we want to re-use the existing system python3? If we force the installation, will we have a duplicate (newly downloaded and existing system-wide installation)?

My system has Python 3.11; let’s force the autopep8 install and see what happens:

[josevnz@dmaf5 ~]$ uv tool install autopep8 --force --python 3.11
Resolved 2 packages in 3ms
Uninstalled 1 package in 1ms
Installed 1 package in 3ms
~ autopep8==2.3.2
Installed 1 executable: autopep8

# Where is autopep8?
[josevnz@dmaf5 ~]$ which autopep8
~/.local/bin/autopep8

# What python is used to run autopep8? Check the Shebang on the script
[josevnz@dmaf5 ~]$ head -n 1 ~/.local/bin/autopep8
#!/home/josevnz/.local/share/uv/tools/autopep8/bin/python3

# Where does that Python point to?
[josevnz@dmaf5 ~]$ ls -l /home/josevnz/.local/share/uv/tools/autopep8/bin/python3
lrwxrwxrwx. 1 josevnz josevnz 6 Mar 22 16:56 /home/josevnz/.local/share/uv/tools/autopep8/bin/python3 -> python
[josevnz@dmaf5 ~]$ ls -l /home/josevnz/.local/share/uv/tools/autopep8/bin/python
lrwxrwxrwx. 1 josevnz josevnz 19 Mar 22 16:56 /home/josevnz/.local/share/uv/tools/autopep8/bin/python -> /usr/bin/python3.11

uv is smart enough to use the system Python.

Now say that you want to make this Python version the default for your user. There is a way to do that, using the experimental flags --preview (adds it to a PATH location) and --default (makes a link to python3):

[josevnz@dmaf5 ~]$ uv python install 3.13 --default --preview
Installed Python 3.13.1 in 23ms
 + cpython-3.13.1-linux-x86_64-gnu (python, python3, python3.13)

# Which one is now python3
[josevnz@dmaf5 ~]$ which python3
~/.local/bin/python3

# Is python3.13 our default python3?
[josevnz@dmaf5 ~]$ which python3.13
~/.local/bin/python3.13

If you want to enforce stricter control over which interpreters can be installed, you can create a $XDG_CONFIG_DIRS/uv/uv.toml or ~/.config/uv/uv.toml file and put the following settings there:

# Location: ~/.config/uv/uv.toml or /etc/uv/uv.toml
# https://docs.astral.sh/uv/reference/settings/#python-preference: only-managed, *managed*, system, only-system
python-preference = "only-system"
# https://docs.astral.sh/uv/reference/settings/#python-downloads: *automatic*, manual or never
python-downloads = "manual"

The Fedora maintainers had an interesting conversation about how to set a more restrictive policy system-wide to prevent accidental interpreter installations. It is definitely worth reading, as you may have a similar conversation within your company. The Fedora system-wide uv.toml has those settings.

To wrap up this section, let me show you how to remove an installed Python using uv:

[josevnz@dmaf5 docs]$ uv python uninstall 3.9
Searching for Python versions matching: Python 3.9
Uninstalled Python 3.9.21 in 212ms
 - cpython-3.9.21-linux-x86_64-gnu

Now it is time to go back to other time-saving features. Is there a way to type less when installing applications? Let’s find out in the next section.

Bash to the rescue

There is nothing ye olde Bourne Shell (or your favorite shell) cannot fix. Put this in your ~/.profile or environment initialization file:

# Use a function instead of an alias (functions handle arguments more cleanly)
function glances {
    uvx --from glances --python 3.12 glances "$@"
}

Another cool trick you can teach bash is to autocomplete your uv commands. Just set it up like this:

[josevnz@dmaf5 docs]$ uv --generate-shell-completion bash > ~/.uv_autocomplete
[josevnz@dmaf5 docs]$ cat<<UVCONF>>~/.bash_profile
> if [[ -f ~/.uv_autocomplete ]]; then
>     . ~/.uv_autocomplete
> fi
> UVCONF
[josevnz@dmaf5 docs]$ . ~/.uv_autocomplete

Before you start writing functions for all your Python tools, I’ll show you an even better way to install them in your environment.

Consider installing your tool instead of running it with a transient deployment.

You probably use Ansible all the time to manage your infrastructure as code. And you don’t want to use uv or uvx to call it. It is time to install it:

uv tool install --force ansible
Resolved 10 packages in 17ms
Installed 10 packages in 724ms
 + ansible==11.3.0
 + ansible-core==2.18.3
 + jinja2==3.1.6
...

Now we can call it without using uv or uvx, as long as ~/.local/bin is in your PATH environment variable. You can confirm that is the case by using which:

which ansible-playbook
~/.local/bin/ansible-playbook
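
If which comes back empty, ~/.local/bin is probably not in your PATH yet. A minimal sketch for bash (adjust for your shell of choice):

# Make the change permanent for future sessions
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bash_profile
# And apply it to the current session
export PATH="$HOME/.local/bin:$PATH"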

Another advantage of using ‘tool install‘ is that if the installation is big (like Ansible), or you have a slow network connection, you only need to install once: the tool is cached locally and ready for use the next time.
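
You can also check at any time which tools uv has installed this way, and remove the ones you no longer need. A short sketch:

# List the tools installed with `uv tool install`, with their versions
uv tool list
# Remove a tool (and its private virtual environment) once you are done with it
uv tool uninstall glances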

The last trick for this section: if you installed several Python tools using uv, you can upgrade them all in one shot with uv tool upgrade --all:

[josevnz@dmaf5 ~]$ uv tool upgrade --all
Updated glances v4.3.0.8 -> v4.3.1
 - glances==4.3.0.8
 + glances==4.3.1
Installed 1 executable: glances

This is pretty convenient!

We have seen, so far, how to manage someone else’s packages. What about our own? The next section explores that.

Managing your Python projects with UV

Eventually, you will find yourself packaging a Python project that has multiple modules, scripts and data files. Python offers a rich ecosystem to manage this scenario and uv takes away some of the complexity.

Our small demo project will create an application that will use the ‘Grocery Stores‘ data from the Connecticut Data portal. The data file is updated every week and is in JSON format. The application takes that data and displays it in a terminal as a table.

‘uv init‘ allows me to initialize a basic project structure, which we will improve on shortly. I always like to start a project with a description and a name:

[josevnz@dmaf5]$ uv init --description 'Grocery Stores in Connecticut' grocery_stores
Initialized project `grocery_stores` at `/home/josevnz/tutorials/docs/Enhancing_Your_Python_Workflow_with_UV_on_Fedora/grocery_stores`

uv created a few files here:

[josevnz@dmaf5 Enhancing_Your_Python_Workflow_with_UV_on_Fedora]$ ls -a grocery_stores/
.  ..  hello.py  pyproject.toml  .python-version  README.md

The most important part, for now, is pyproject.toml. It has a full description of your project among other things:

[project]
name = "pretty-csv"
version = "0.1.0"
description = "Grocery Stores in Connecticut"
readme = "README.md"
requires-python = ">=3.13"
dependencies = []

Also created is .python-version which has the version of Python supported by this project. This is how uv enforces the Python version used in this project.
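
You can peek at the file to see the pin. On my setup it contains just the minor version (assuming the project was created with Python 3.13, as shown above):

cat .python-version
# 3.13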

Another file is hello.py. You can get rid of it; it just contains a hello world in Python. We will also fill README.md with proper content later.

Back to our script, we will use a TUI framework called Textual that will allow us to take the JSON file and show the contents as a table. Because we know that dependency, let’s use uv to add it to our project:

[josevnz@dmaf5 grocery_stores]$ uv add 'textual==2.1.2'
Using CPython 3.13.1
Creating virtual environment at: .venv
Resolved 11 packages in 219ms
Prepared 2 packages in 143ms
Installed 10 packages in 47ms
 + linkify-it-py==2.0.3
 + markdown-it-py==3.0.0
 + mdit-py-plugins==0.4.2
 + mdurl==0.1.2
 + platformdirs==4.3.7
 + pygments==2.19.1
 + rich==13.9.4
 + textual==2.1.2
 + typing-extensions==4.12.2
 + uc-micro-py==1.0.3

Three things happened:

  1. We downloaded textual and its transitive dependencies.
  2. pyproject.toml was updated, and the dependencies section now has values (go ahead, open the file and see):
[project]
name = "pretty-csv"
version = "0.1.0"
description = "Simple program that shows contents of a CSV file as a table on the terminal"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "textual==2.1.2",
]
  3. uv created a uv.lock file next to the pyproject.toml. This file has the exact version of all the packages used in your project, which ensures consistency.
version = 1
requires-python = ">=3.13"

[[package]]
name = "linkify-it-py"
version = "2.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "uc-micro-py" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2a/ae/bb56c6828e4797ba5a4821eec7c43b8bf40f69cda4d4f5f8c8a2810ec96a/linkify-it-py-2.0.3.tar.gz", hash = "sha256:68cda27e162e9215c17d786649d1da0021a451bdc436ef9e0fa0ba5234b9b048", size = 27946 }
wheels = [
    { url = "https://files.pythonhosted.org/packages/04/1e/b832de447dee8b582cac175871d2f6c3d5077cc56d5575cadba1fd1cccfa/linkify_it_py-2.0.3-py3-none-any.whl", hash = "sha256:6bcbc417b0ac14323382aef5c5192c0075bf8a9d6b41820a2b66371eac6b6d79", size = 19820 },
]
...

You can see uv.lock is very explicit, as its purpose is to be as specific and unambiguous as possible. This file is meant to be added to your git repository, the same as .python-version. It will allow developers across your team to have a consistent tool set installed.
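
For example, a teammate who clones the repository can recreate the exact same environment from the lock file. A sketch, assuming a hypothetical repository URL:

# Clone the project (hypothetical URL)
git clone https://github.com/example/grocery_stores.git
cd grocery_stores
# Create .venv and install the exact versions recorded in uv.lock
uv sync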

Let’s also add the ‘httpx‘ library, so we can download the grocery data asynchronously:

[josevnz@dmaf5 pretty_csv]$ uv add 'httpx==0.28.1'
Resolved 18 packages in 229ms
Prepared 6 packages in 108ms
Installed 7 packages in 8ms
+ anyio==4.9.0
+ certifi==2025.1.31
+ h11==0.14.0
+ httpcore==1.0.7
+ httpx==0.28.1
+ idna==3.10
+ sniffio==1.3.1

These are runtime dependencies, but what if we want to use tools to do things like linting, or profiling? We will explore that in the next section.

Development dependencies

You may want to use some tools while developing your application, like pytest to run unit tests or pylint to check the correctness of the code. But you don’t want to deploy those tools in your final version of the application.

These are development dependencies, and you can add them with the ‘--dev‘ flag to a special section of your project, like this:

[josevnz@dmaf5 grocery_stores]$ uv add --dev pylint==3.3.6 pytest==8.3.5
Resolved 29 packages in 15ms
Installed 10 packages in 19ms
 + astroid==3.3.9
 + dill==0.3.9
 + iniconfig==2.1.0
 + isort==6.0.1
 + mccabe==0.7.0
 + packaging==24.2
 + pluggy==1.5.0
 + pylint==3.3.6
 + pytest==8.3.5
 + tomlkit==0.13.2

This produces the following section on my pyproject.toml file:

[dependency-groups]
dev = [
    "pylint==3.3.6",
    "pytest==8.3.5",
]

Writing a JSON-to-Table display Python application

The first step is to have the code that loads the data, then renders the Grocery store raw data as a table. I will let you read the Textual tutorial on how to do this and instead will share the bulk of the code I wrote in a file called ‘groceries.py‘:

"""
Displays the latest Grocery Store data from
the Connecticut Data portal.
Author: Jose Vicente Nunez <kodegeek.com@protonmail.com>
Press ctrl+q to exit the application.
"""

import httpx
from httpx import HTTPStatusError
from textual.app import App, ComposeResult
from textual.widgets import DataTable, Header, Footer
from textual import work, on
from orjson import loads

GROCERY_API_URL = "https://data.ct.gov/resource/fv3p-tf5m.json"


class GroceryStoreApp(App):
    def compose(self) -> ComposeResult:
        header = Header(show_clock=True)
        yield header
        table = DataTable(id="grocery_store_table")
        yield table
        yield Footer()

    @work(exclusive=True)
    async def update_grocery_data(self) -> None:
        """
        Update the Grocery data table and provide some feedback to the user
        :return:
        """
        table = self.query_one("#grocery_store_table", DataTable)

        async with httpx.AsyncClient() as client:
            response = await client.get(GROCERY_API_URL)
            try:
                response.raise_for_status()
                groceries_data = loads(response.text)
                table.add_columns(*[key.title() for key in groceries_data[0].keys()])
                cnt = 0
                for row in groceries_data[1:]:
                    table.add_row(*(row.values()))
                    cnt += 1
                table.loading = False
                self.notify(
                    message=f"Loaded {cnt} Grocery Stores",
                    title="Data loading complete",
                    severity="information"
                )
            except HTTPStatusError:
                self.notify(
                    message=f"HTTP code={response.status_code}, message={response.text}",
                    title="Could not download grocery data",
                    severity="error"
                )

    def on_mount(self) -> None:
        """
        Render the initial component status, show an initial loading message
        :return:
        """
        table = self.query_one("#grocery_store_table", DataTable)
        table.zebra_stripes = True
        table.cursor_type = "row"
        table.loading = True
        self.notify(
            message=f"Retrieving information from CT Data portal",
            title="Loading data",
            severity="information",
            timeout=5
        )
        self.update_grocery_data()

    @on(DataTable.HeaderSelected)
    def on_header_clicked(self, event: DataTable.HeaderSelected):
        """
        Sort rows by column header
        """
        table = event.data_table
        table.sort(event.column_key)


if __name__ == "__main__":
    app = GroceryStoreApp()
    app.title = "Grocery Stores"
    app.sub_title = "in Connecticut"
    app.run()

Now that we have some code, let’s test it. First using an editable mode (in a way similar to using pip):

[josevnz@dmaf5 grocery_stores]$ uv pip install --editable .
Resolved 18 packages in 105ms
   Built grocery-stores @ file:///home/josevnz/tutorials/docs/Enhancing_Your_Python_Workflow_with_UV_on_Fedora/grocery_stores
Prepared 18 packages in 1.07s
Uninstalled 18 packages in 87ms
Installed 18 packages in 53ms
 ~ anyio==4.9.0
 ~ certifi==2025.1.31
 ~ grocery-stores==0.1.0 (from file:///home/josevnz/tutorials/docs/Enhancing_Your_Python_Workflow_with_UV_on_Fedora/grocery_stores)
 ~ h11==0.14.0
 ~ httpcore==1.0.7
 ~ httpx==0.28.1
 ~ idna==3.10
 ~ linkify-it-py==2.0.3
 ~ markdown-it-py==3.0.0
 ~ mdit-py-plugins==0.4.2
 ~ mdurl==0.1.2
 ~ platformdirs==4.3.7
 ~ pygments==2.19.1
 ~ rich==13.9.4
 ~ sniffio==1.3.1
 ~ textual==2.1.2
 ~ typing-extensions==4.12.2
 ~ uc-micro-py==1.0.3

Now run our grocery store application using uv, which will pick up our local editable installation and use it:

uv run groceries.py

The application looks more or less like this:

The grocery store application was written with Textual. Not bad for a few lines of code.

Next, let’s see how we can lint and unit test our new grocery store application.

Linting code with pylint

We use pylint as follows (I like to pin the version to avoid unwanted warnings due to API changes):

[josevnz@dmaf5 grocery_stores]$ uv run --with 'pylint==3.3.6' pylint groceries.py 
************* Module groceries
groceries.py:15:0: C0115: Missing class docstring (missing-class-docstring)
groceries.py:25:8: W0612: Unused variable 'table' (unused-variable)
groceries.py:27:12: W0612: Unused variable 'response' (unused-variable)
groceries.py:29:4: C0116: Missing function or method docstring (missing-function-docstring)
groceries.py:10:0: W0611: Unused work imported from textual (unused-import)

------------------------------------------------------------------
Your code has been rated at 7.73/10 (previous run: 7.73/10, +0.00)

Fix the issues and run pylint again:

[josevnz@dmaf5 grocery_stores]$ uv run --with 'pylint==3.3.6' pylint groceries.py

-------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 9.04/10, +0.96)

Running unit tests with pytest

My Textual app uses async, so it requires a little bit of support from pytest. Not a problem:

[josevnz@dmaf5 grocery_stores]$ uv add --dev pytest_asyncio
[josevnz@dmaf5 grocery_stores]$ uv run --dev pytest test_groceries.py
======================================================================================================================= test session starts ========================================================================================================================
platform linux -- Python 3.13.1, pytest-8.3.5, pluggy-1.5.0
rootdir: /home/josevnz/tutorials/docs/Enhancing_Your_Python_Workflow_with_UV_on_Fedora/grocery_stores
configfile: pyproject.toml
plugins: anyio-4.9.0, asyncio-0.25.3
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 1 item                                                                                                                                                                                                                                                   

test_groceries.py .                                                                                                                                                                                                                                          [100%]

======================================================================================================================== 1 passed in 0.43s =========================================================================================================================

My test code just simulates starting the application and pressing ctrl+q to exit it. Not very useful by itself, but this next test gives you an idea of what you can do to test your application by simulating keystrokes:

"""
Unit tests for Groceries application
https://textual.textualize.io/guide/testing/
"""
import pytest

from grocery_stores_ct.groceries import GroceryStoreApp


@pytest.mark.asyncio
async def test_groceries_app():
    groceries_app = GroceryStoreApp()
    async with groceries_app.run_test() as pilot:
        await pilot.press("ctrl+q")  # Quit

Now run the tests:


[josevnz@dmaf5 grocery_stores]$ uv run --dev pytest test_groceries.py

================================================ test session starts =================================================
platform linux -- Python 3.13.1, pytest-8.3.5, pluggy-1.5.0
rootdir: /home/josevnz/tutorials/docs/Enhancing_Your_Python_Workflow_with_UV_on_Fedora/grocery_stores
configfile: pyproject.toml
plugins: asyncio-0.25.3, anyio-4.9.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 1 item

test/test_groceries.py . [100%]

================================================= 1 passed in 1.17s ==================================================

Packaging and uploading to your Artifact repository

It is time to package our new application. Let’s try to build it:

[josevnz@dmaf5 grocery_stores]$ uv build
Building source distribution...
error: Multiple top-level modules discovered in a flat-layout: ['groceries', 'test_groceries'].

To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
...

Not so fast. uv is getting confused because we have two top-level modules instead of one. The right thing to do is to set up a src layout for our project, so we move some files around.
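
The moves look roughly like this (a sketch; adjust the paths if your layout differs):

# Create the src layout and a separate test directory
mkdir -p src/grocery_stores_ct test
touch src/grocery_stores_ct/__init__.py
# Relocate the application module and the unit test
mv groceries.py src/grocery_stores_ct/
mv test_groceries.py test/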

After moving groceries.py to a module called ‘src/grocery_stores_ct‘ and test_groceries.py to test/, the tree looks like this:

[josevnz@dmaf5 grocery_stores]$ tree
.
├── pyproject.toml
├── README.md
├── src
│   ├── grocery_stores_ct
│   │   ├── groceries.py
│   │   └── __init__.py
│   └── grocery_stores.egg-info
│       ├── dependency_links.txt
│       ├── PKG-INFO
│       ├── requires.txt
│       ├── SOURCES.txt
│       └── top_level.txt
├── test
│   └── test_groceries.py
└── uv.lock

Re-test it and lint it:

uv pip install --editable .[dev]
uv run --dev pytest test/test_groceries.py
uv run --with 'pylint==3.3.6' pylint src/grocery_stores_ct/groceries.py

And now build it again:

[josevnz@dmaf5 grocery_stores]$ uv build
Building source distribution...
running egg_info
writing src/grocery_stores.egg-info/PKG-INFO
writing dependency_links to src/grocery_stores.egg-info/dependency_links.txt
removing build/bdist.linux-x86_64/wheel
Successfully built dist/grocery_stores-0.1.0.tar.gz
Successfully built dist/grocery_stores-0.1.0-py3-none-any.whl

Now comes the time when you want to share your application with others.

Uploading to a custom index

I don’t want to pollute the real pypi.org with a test application, so instead I will set my index to be something else, like test.pypi.org. In your case this can be a Nexus 3 repository, an Artifactory repository, or whatever artifact repository you have set up in your company.

For test.pypi.org, add the following to your pyproject.toml file:

# URL match your desired location
[[tool.uv.index]]
name = "testpypi"
url = "https://test.pypi.org/simple/"
publish-url = "https://test.pypi.org/legacy/"
explicit = true

You will also need to generate an application token (this varies by provider and won’t be covered here). Once you get your token, call uv publish --index testpypi --token $token:

[josevnz@dmaf5 grocery_stores]$ uv publish --index testpypi --token pypi-AgENdGVzdC5weXBpLm9yZwIkYzFkODg5ODMtODUxZS00ODc2LWFhYzMtZjhhNWFmNjZhODJmAAIqWzMsIjZmZGNjMzc1LTYxNmEtNDA5Zi1hNTJkLWJhMDZmNWQ3N2NlZSJdAAAGIG3wrTZdgmOBlahBlahBlah 
warning: `uv publish` is experimental and may change without warning
Publishing 2 files https://test.pypi.org/legacy/
Uploading grocery_stores-0.1.0-py3-none-any.whl (2.7KiB)
Uploading grocery_stores-0.1.0.tar.gz (2.5KiB)

Other things that you should have in your pyproject.toml

uv does a lot of things, but it doesn’t do everything. There is a lot of extra metadata that you should have in your pyproject.toml file. I’ll share some of the essentials here:

[project]
authors = [
{name = "Jose Vicente Nunez", email = "kodegeek.com@protonmail.com"}
]
maintainers = [
{name = "Jose Vicente Nunez", email = "kodegeek.com@protonmail.com"}
]
license = "MIT AND (Apache-2.0 OR BSD-2-Clause)"
keywords = ["ct", "tui", "grocery stores", "store"]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"Topic :: Desktop Environment",
"Programming Language :: Python :: 3.13",
]
[project.urls]
Homepage = "https://github.com/josevnz/tutorials"
Repository = "https://github.com/josevnz/tutorials.git"

A few things before wrapping this section:

  • You can see the full list of classifiers here.
  • If you do not want a project to be uploaded to PyPI by accident, add the following classifier: ‘Private :: Do Not Upload‘.
  • You will need to bump the version, rebuild and upload again after making any changes, like adding keywords (useful to tell the world where to find your app); see the sketch after this list.
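
The bump-and-republish cycle is short, reusing the commands from the earlier sections (a sketch; $token is the repository token you generated before):

# 1. Bump the version in pyproject.toml, for example: version = "0.1.1"
# 2. Rebuild the source distribution and the wheel
uv build
# 3. Upload the new artifacts to your index
uv publish --index testpypi --token $token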

Inline script metadata: self-contained scripts

Python has a feature, PEP 723, that allows metadata to be embedded in the script itself, like this:

# /// script
# requires-python = ">=3.13"
# dependencies = [
# "httpx==0.28.1",
# "orjson==3.10.15",
# "textual==2.1.2",
# ]
# ///

# ... Omitted rest of the code

These 8 lines at the beginning of the script indicate that this is the embedded metadata.

If you remember our pyproject.toml file, these are the instructions used by package managers like setuptools and uv to handle the project dependencies, like Python versions and the libraries required to run. This is powerful, since tools capable of reading this inline metadata (between the `///` markers) do not need to check an extra file.

Now, uv has a flag called `--script` which allows it to interpret the inline metadata in the script. For example, this will add the dependencies to the `example.py` script by writing them into the script directly:

uv add --script example.py 'requests<3' 'rich'
uv run example.py


This is convenient. If we combine inline dependencies and uv, we can have a self-executing script that also downloads its own dependencies:

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "httpx==0.28.1",
# "orjson==3.10.15",
# "textual==2.1.2",
# ]
# ///
"""
Displays the latest Grocery Store data from
the Connecticut Data portal.
Author: Jose Vicente Nunez <kodegeek.com@protonmail.com>
This version of the script uses inline script metadata:
https://packaging.python.org/en/latest/specifications/inline-script-metadata/
Press ctrl+q to exit the application.
"""

import httpx
from httpx import HTTPStatusError
from textual.app import App, ComposeResult
from textual.widgets import DataTable, Header, Footer
from textual import work, on
# pylint: disable=no-name-in-module
from orjson import loads

GROCERY_API_URL = "https://data.ct.gov/resource/fv3p-tf5m.json"


class GroceryStoreApp(App):
    """
    TUI application that shows grocery stores in CT
    """
    current_sorts: set = set()

    def compose(self) -> ComposeResult:
        header = Header(show_clock=True)
        yield header
        table = DataTable(id="grocery_store_table")
        yield table
        yield Footer()

    @work(exclusive=True)
    async def update_grocery_data(self) -> None:
        """
        Update the Grocery data table and provide some feedback to the user
        :return:
        """
        table = self.query_one("#grocery_store_table", DataTable)

        async with httpx.AsyncClient() as client:
            response = await client.get(GROCERY_API_URL)
            try:
                response.raise_for_status()
                groceries_data = loads(response.text)
                table.add_columns(*[key.title() for key in groceries_data[0].keys()])
                cnt = 0
                for row in groceries_data[1:]:
                    table.add_row(*(row.values()))
                    cnt += 1
                table.loading = False
                self.notify(
                    message=f"Loaded {cnt} Grocery Stores",
                    title="Data loading complete",
                    severity="information"
                )
            except HTTPStatusError:
                self.notify(
                    message=f"HTTP code={response.status_code}, message={response.text}",
                    title="Could not download grocery data",
                    severity="error"
                )

    def on_mount(self) -> None:
        """
        Render the initial component status
        :return:
        """
        table = self.query_one("#grocery_store_table", DataTable)
        table.zebra_stripes = True
        table.cursor_type = "row"
        table.loading = True
        self.notify(
            message="Retrieving information from CT Data portal",
            title="Loading data",
            severity="information",
            timeout=5
        )
        self.update_grocery_data()

    def sort_reverse(self, sort_type: str):
        """
        Determine if `sort_type` is ascending or descending.
        """
        reverse = sort_type in self.current_sorts
        if reverse:
            self.current_sorts.remove(sort_type)
        else:
            self.current_sorts.add(sort_type)
        return reverse

    @on(DataTable.HeaderSelected)
    def on_header_clicked(self, event: DataTable.HeaderSelected):
        """
        Sort rows by column header
        """
        table = event.data_table
        table.sort(
            event.column_key,
            reverse=self.sort_reverse(event.column_key.value)
        )


if __name__ == "__main__":
    app = GroceryStoreApp()
    app.title = "Grocery Stores"
    app.sub_title = "in Connecticut"
    app.run()

This is the same script we wrote before, except that we use one last bit of magic here:

#!/usr/bin/env -S uv run --script

We call env (part of coreutils) with -S to split the arguments and call uv with the --script flag. Then uv reads the inline metadata and downloads the required Python with all the dependencies automatically:

[josevnz@dmaf5 Enhancing_Your_Python_Workflow_with_UV_on_Fedora]$ chmod a+xr inline_script_metadata/groceries.py
[josevnz@dmaf5 Enhancing_Your_Python_Workflow_with_UV_on_Fedora]$ ./inline_script_metadata/groceries.py
Installed 18 packages in 29ms
# And here the script starts running!!!

It doesn’t get simpler than this. This is great, for example, for running installer scripts.

Learning more

A lot of material is covered here, but there is still more to learn. As with everything, you will need to experiment to see what best fits your style and available resources.

Below is a list of links I found useful and may also help you:

  • The official uv documentation is very complete, and you will most likely spend your time going back and forth reading it.
  • Users of older Fedora releases may take a look at the uv source RPM. Lots of good stuff there, including Bash auto-completion for uv.
  • Anaconda and miniconda also have counterparts written in Rust (mamba and micromamba), in case you decide jumping to uv is too soon. These are backward compatible and much faster.
  • Do you remember the uv.lock file we discussed before? Python has now agreed on a way to manage lock files (PEP 751) that is much more powerful than the pip requirements.txt file. Keep an eye on packaging.python.org for more details.
  • I showed you how to use pylint to check for code smells. I would strongly recommend you also try ruff. It is written in Rust and it is pretty fast:
[josevnz@dmaf5 grocery_stores]$ uv tool install ruff@latest
Resolved 1 package in 255ms
Prepared 1 package in 1.34s
Installed 1 package in 4ms
ruff==0.11.2
Installed 1 executable: ruff
# Now let's check the code
[josevnz@dmaf5 grocery_stores]$ ruff check src/grocery_stores_ct
All checks passed!

Remember: “perfect is the enemy of good”, so try uv and other tools and see what is best for your Python workflow needs.

Infra and RelEng Update – Week 13 2025

Posted by Fedora Community Blog on 2025-03-28 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 24 Mar – 28 Mar 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 13 2025 appeared first on Fedora Community Blog.

Contribute to Fedora 42 KDE, Virtualization, and Upgrade Test Days

Posted by Fedora Magazine on 2025-03-28 08:00:00 UTC

Fedora test days are events where anyone can help make certain that changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are three test periods occurring in the coming days:

  • Monday, March 31 through Monday, April 7 is to test the KDE Desktop and Apps
  • Wednesday, April 2 through Sunday, April 6 is for the Upgrade Test Days
  • Saturday, April 5 through Monday, April 7 is to test Virtualization

Come and test with us to make Fedora 42 even better. Read more below on how to do it.

KDE Plasma and Apps

The KDE SIG is working on final integration for Fedora 42. Some of the app versions were recently released and will soon arrive in Fedora Linux 42. As a result, the KDE SIG and QA teams have organized a test week from Monday, March 31, 2025, through Monday, April 07, 2025. The wiki page contains links to the test images you’ll need to participate.

Upgrade test day

As we approach the Fedora Linux 42 release date, it’s time to test upgrades. This release has many changes, and it becomes essential that we test the graphical upgrade methods as well as the command-line methods.

This test period will run from Wednesday, April 2 through Sunday, April 6. It will test upgrading from a fully updated F40 or F41 to F42 for all architectures (x86_64, ARM, aarch64) and variants (WS, cloud, server, silverblue, IoT). See this wiki page for information and details. For this test period, we also want to test DNF5 Plugins before and after upgrade. Recently noted regressions resulted in a Blocker Bug. The DNF5 Plugin details are available here.

Virtualization test day

This test period will run from Saturday, April 5 through Monday, April 7 and will test all forms of virtualization possible in Fedora 42. The test period will focus on testing Fedora Linux, or your favorite distro, inside a bare metal implementation of Fedora Linux running Boxes, KVM, VirtualBox and whatever you have. The test cases outline the general features of installing the OS and working with it. These cases are available on the results page.

How do test days work?

A test period is an event where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

Updates and Reboots

Posted by Fedora Infrastructure Status on 2025-03-26 21:00:00 UTC

We will be applying updates to all our servers and rebooting into newer kernels. Services may be up or down during the outage window.

End of OpenID authentication in Fedora Account System

Posted by Fedora Magazine on 2025-03-26 16:20:42 UTC

The Fedora Infrastructure Team is announcing the end of OpenID in the Fedora Account System (FAS). This will occur on 20th May 2025.

Why the change?

OpenID is being replaced by OpenID Connect (OIDC) across most of the modern web, and most of the Fedora infrastructure is already using OIDC as the default authentication method. OIDC offers better security by handling both authentication and authorization. It also allows us to have more control over services that are using the Fedora Account System (FAS) for authentication.

What will change for you?

With the End of Life of OpenID, we will switch to OIDC for everything and no longer support authentication with OpenID.

If your web app or service is already using OIDC for authentication, nothing will change for you. If you are still using OpenID, open a ticket on the Fedora Infrastructure issue tracker and we will help you with the migration to OIDC.

For users who use FAS as an authentication option, there should be no change at all.

How to check if a service you maintain is using OpenID?

You may quickly check if your service is using OpenID for FAS authentication by looking at where you are redirected when logging in with FAS.

If you are redirected to https://id.fedoraproject.org/openidc/Authorization you are already using OIDC and you can just ignore this announcement.

If you are being redirected to https://id.fedoraproject.org/openid you are still using the OpenID authentication method. You should open a ticket on Fedora Infrastructure issue tracker so we can help you with migration.
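
If you prefer the command line, you can inspect the redirect without a browser. A rough sketch, assuming your service exposes a login URL that redirects to FAS (the URL below is hypothetical; substitute your own, and note that some login flows need a browser session to trigger the redirect):

# Follow the login redirects and print each Location header
curl -sIL https://your-service.example.org/login | grep -i '^location:'
# A hit on .../openidc/Authorization means OIDC; .../openid means legacy OpenID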

What will happen now?

We will be reaching out directly to services we identify as using OpenID. But since we don’t have control over OpenID authentication, we can’t identify everyone.

If you are interested in following this work feel free to watch this ticket.

Infra and RelEng Update – Week 12

Posted by Fedora Community Blog on 2025-03-21 13:41:11 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 17 Mar – 21 Mar 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 12 appeared first on Fedora Community Blog.

Contribute at the Fedora 42 CoreOS Test Week

Posted by Fedora Magazine on 2025-03-21 08:00:00 UTC

The Fedora 42 CoreOS Test Week focuses on testing FCOS based on Fedora 42. The FCOS next stream has been rebased on Fedora 42 content. This will be coming soon to the testing and stable streams. To prepare for the content being promoted to other streams, the Fedora CoreOS and QA teams have organized test days from Monday, 24 March through Friday, 28 March. Refer to the wiki page for links to the test cases and materials you’ll need to participate. The FCOS and QA teams will meet and communicate with the community asynchronously over multiple Matrix/Element channels. The announcement covers the other details!

How does a test day work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

End of OpenID authentication in Fedora Account System

Posted by Fedora Community Blog on 2025-03-20 10:00:00 UTC

At the latest Fedora Infrastructure weekly meeting, we decided on a date for the OpenID authentication sunset. The date is 20th May 2025.

Why the change?

OpenID is being replaced by OpenID Connect (OIDC) across most of the modern web, and most of the Fedora infrastructure is already using OIDC as the default authentication method. OIDC offers us better security by handling both authentication and authorization. It also allows us to have more control over services that are using the Fedora Account System (FAS) for authentication.

What will change for you?

With the End of Life of OpenID, we will switch to OIDC for everything and no longer support authentication with OpenID. If your web app or service is already using OIDC for authentication, nothing will change for you. If you are still using OpenID, open a ticket on the Fedora Infrastructure issue tracker and we will help you with the migration to OIDC. For users who use FAS as an authentication option, there should be no change at all.

What will happen now?

We will be reaching out directly to services we identified as using OpenID, but since we don’t have control over OpenID authentication, we can’t identify everyone.

If you are interested in following this work feel free to watch this ticket.

The post End of OpenID authentication in Fedora Account System appeared first on Fedora Community Blog.

How to rebase to Fedora Silverblue 42 Beta

Posted by Fedora Magazine on 2025-03-18 18:00:00 UTC

Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. This article provides the steps to upgrade to the newly released Fedora Linux 42 Beta, and how to revert if anything unforeseen happens.

Before attempting an upgrade to the Fedora Linux 42 Beta, apply any pending upgrades.
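
On Silverblue, one way to do that from a terminal is with rpm-ostree (a quick sketch; applying updates through GNOME Software works just as well):

# Download and stage any pending updates for the current release
$ rpm-ostree upgrade
# Reboot into the updated deployment before starting the rebase
$ systemctl reboot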

Updating using terminal

Because the Fedora Linux 42 Beta is not available in GNOME Software, the whole upgrade must be done through a terminal.

First, check if the 42 branch is available, which should be true now:

$ ostree remote refs fedora

You should see the following line in the output:

fedora:fedora/42/x86_64/silverblue

If you want to pin the current deployment (this deployment will stay as an option in GRUB until you remove it), you can do it by running:

# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment, use the following command (2 corresponds to the entry position in the output from rpm-ostree status):

$ sudo ostree admin pin --unpin 2

Next, rebase your system to the Fedora 42 branch.

$ rpm-ostree rebase fedora:fedora/42/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora Silverblue 42 Beta.

How to revert

If anything bad happens — for instance, if you can’t boot to Fedora Silverblue 42 Beta at all — it’s easy to go back. Pick the previous entry in the GRUB boot menu (you need to press ESC during the boot sequence to see the GRUB menu in newer versions of Fedora Silverblue), and your system will start in its previous state. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase to Fedora Silverblue 42 Beta and fall back. So why not do it today?

Known Issues

FAQ

Because there are similar questions in the comments of each article about rebasing to a newer version of Silverblue, I will try to answer them in this section.

Question: Can I skip versions during a rebase of Fedora Linux? For example, from Fedora Silverblue 40 to Fedora Silverblue 42?

Answer: Although it could sometimes be possible to skip versions during a rebase, it is not recommended. You should always update to one version above (40 -> 41, for example) to avoid unnecessary errors.
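
For example, going from Fedora Silverblue 40 to 42 would be two rebases with a reboot in between (a sketch; the branch names follow the same pattern shown earlier in this article):

# First hop: 40 -> 41
$ rpm-ostree rebase fedora:fedora/41/x86_64/silverblue
$ systemctl reboot
# Second hop: 41 -> 42
$ rpm-ostree rebase fedora:fedora/42/x86_64/silverblue
$ systemctl reboot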

Question: I have rpm-fusion layered and I got errors during rebase. How should I do the rebase?

Answer: If you have rpm-fusion layered on your Silverblue installation, you should do the following before the rebase:

rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release

After doing this you can follow the guide in this article.

Question: Could this guide be used for other ostree editions (Fedora Atomic Desktops) as well like Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic),…?

Answer: Yes, you can follow the ‘Updating using terminal’ part of this guide for every ostree edition of Fedora. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/42/x86_64/kinoite.

Announcing Fedora Linux 42 Beta

Posted by Fedora Magazine on 2025-03-18 14:05:00 UTC

The Fedora Project is pleased to announce the availability of Fedora Linux 42 Beta! We have lots to share with you about our upcoming release of Fedora Linux 42, and we want to give you a sneak preview of what’s in this release in the beta version that is out now.

Get the pre-release of any of our editions from our project website:

You can also update an existing system to the beta using DNF system-upgrade.
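
If you go the DNF route, the usual sequence looks roughly like this (a sketch; see the official upgrade documentation for the details of your release):

# Make sure the current system is fully updated first
sudo dnf upgrade --refresh
# Download the Fedora Linux 42 Beta packages
sudo dnf system-upgrade download --releasever=42
# Reboot and let the upgrade run
sudo dnf system-upgrade reboot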

Beta Release Highlights

New Edition Alert

KDE Plasma Desktop has been promoted to edition status starting with Fedora Linux 42 Beta! You can expect to continue to enjoy the same level of quality from Fedora KDE Plasma Desktop that you always have. In addition, Fedora KDE Plasma Desktop is now supported on Power Systems (ppc64le). Also the full KDE stack (including KDE PIM) is now available on Power and we have installable live images for OpenPOWER based systems like the Talos Workstation from Raptor Systems.

Fedora COSMIC Spin

We also have a brand new Spin in Fedora Linux 42 Beta – introducing the Fedora COSMIC spin! COSMIC is a new Rust-based desktop environment developed by System76, makers of Pop!_OS. It has many unique features, such as hybrid per-workspace window/tiling management, window stacks with tabs to switch between windows, and robust customization features that integrate with GTK and (later on) Qt!

Anaconda Changes

Anaconda has some pretty significant changes in Fedora Linux 42 Beta. The installer team has introduced a new Web UI that is now the default for Fedora Workstation. This means that users can enjoy a smooth installation experience, with features such as an installation progress indicator, built-in help, configuration review and more. The new UI also includes a Wizard which allows users to skip what they don’t need during installation.

The Anaconda team has launched a new web UI for partitioning in Fedora Linux 42 Beta. With this new feature, the biggest benefit to Fedora users is the new guided partitioning function. This provides a more powerful automatic partitioning, where the user will select a goal and have additional customizations possible. This change also comes with a new “Reinstall Fedora” option which allows users to easily reinstall their system if something goes wrong. It also adds support for dual-boot installation. Users just need to create some free space and don’t have to understand other details.

Some updates to enjoy in Fedora Linux 42 Beta

This release will include the latest upstream release of python-setuptools. Setuptools is a package development process library designed to facilitate packaging Python projects. It enhances the former Python standard library distutils (distribution utilities).

There is also a DNF5 improvement that includes new logic that will remove expired and obsolete repository keys from the system. This means users can enjoy the automatic management of repository keys during software installation or upgrades.

We are also including the newest version of Ruby with this beta release. Ruby 3.4 is the latest stable version of Ruby. Many new features and improvements are included for the increasingly diverse and expanding demands for Ruby. With this major update from Ruby 3.3 in Fedora Linux 41 to Ruby 3.4 in Fedora Linux 42, Fedora Linux becomes the superior Ruby development platform.

In Fedora Workstation, we have also introduced the SDL3 transition and Wayland-by-default for SDL apps, and included the new GNOME well-being feature.

There are a lot more changes coming in Fedora Linux 42. The above is just a snippet! Please check out the Fedora Linux 42 Change Set page for a complete list of the changes included with this OS release.

Testing needed

As with any beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora Quality team via the test mailing list or in the #quality channel on Fedora Chat. As testing progresses, common issues are tracked in the “Common Issues” category on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the beta release?

A beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.


Comments are welcome on discussion.fedoraproject.org. For tech support, please use ask.fedoraproject.org.

Announcing Fedora Asahi Remix 42 Beta

Posted by Fedora Magazine on 2025-03-18 14:03:53 UTC

We are happy to announce the availability of Fedora Asahi Remix 42 Beta. This pre-release will bring the freshly announced Fedora Linux 42 Beta to Apple Silicon Macs. We expect to announce general availability of Fedora Asahi Remix 42 in about a month. This will coincide with the overall Fedora Linux 42 release.

Fedora Asahi Remix is developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project. Fedora Asahi Remix 42 Beta includes all of the Changes from Fedora Linux 42. One change of note for Apple Silicon Macs is the integration of FEX in Fedora Linux. This provides an easier way to run x86 and x86-64 binaries out of the box via emulation.

You can try out Fedora Asahi Remix 42 Beta today by following our installation guide. Existing systems running Fedora Asahi Remix 40 or 41 can be updated following the usual Fedora upgrade process. Upgrades via Fedora Workstation’s Software application are unfortunately not supported, so DNF’s System Upgrade plugin has to be used.

Since this is a beta release, we expect that you may encounter bugs or missing features. Please report any Remix-specific issues in our tracker. You may also reach out in our Discourse forum or our Matrix room for user support.

Infra and RelEng Update – Week 11 2025

Posted by Fedora Community Blog on 2025-03-14 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 10 – 14 March 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

List of new releases of apps maintained by I&R Team

If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 11 2025 appeared first on Fedora Community Blog.

Fedora Community Ops 2024 Reboot: A Retrospective

Posted by Fedora Magazine on 2025-03-10 08:00:00 UTC

The Fedora Community Operations (CommOps) Initiative, formally titled “Community Ops 2024 Reboot,” ran from late 2023 to December 2024, aiming to bolster community support within the Fedora Project. This initiative demonstrated a strong interest within the Fedora contributor community to engage not only in operational tasks but also in exploring Fedora’s data and understanding community trends. While the “Community Ops 2024 Reboot” didn’t generate a massive amount of immediate change, it successfully re-established community operations as a key area of focus within the Fedora Project. This post summarizes the key achievements and areas for growth.

Leveraging Fedora Infrastructure: Paving the Way for Data Exploration

The “Community Ops 2024 Reboot” initiative effectively utilized Fedora Infrastructure, gaining access to crucial resources like the PostgreSQL database for Datanommer and deploying a Business Intelligence (BI) platform on AWS Cloud. Critically, the initiative also focused on refining the process for community members to work with public Fedora data. Currently, this process is often opaque, difficult, and time-consuming. While the modernization work is ongoing, the initiative laid the groundwork for creating common, accessible pathways that any contributor can follow in the future. This effort aims to democratize access to Fedora data, fostering more data experiments and deeper insights into our contributor community.

Process Improvement: A Mixed Bag

Process improvement efforts under the “Community Ops 2024 Reboot” saw both successes and challenges. A new Standard Operating Procedure (SOP) for virtual Fedora events was developed, aiming to streamline event organization. However, implementation revealed unforeseen complexities, including significant manual effort and reliance on the Fedora Community Architect. This was reflected in the contrasting outcomes of the Fedora Linux Release Parties for versions 40 and 41. While the former was successfully executed, the latter faced last-minute challenges that impacted smooth execution. Although documentation for the Join SIG process and contributor recognition efforts through Community Blog series and Fedora Badges were planned, they were not completed within the initiative’s timeframe. This was not due to a lack of importance, but rather because the team prioritized establishing an onboarding pipeline for CommOps members and defining the team’s scope and purpose, given the available community contributors.

Community Social Analysis: Laying the Foundation

Despite limited resources, the “Community Ops 2024 Reboot” team made progress in Community Social Analysis. Initial governance needs were defined, and key metrics were documented to facilitate discussions and establish common terminology. This work lays the groundwork for standardized data governance within Fedora. A Pandas-based analysis solution for the Fedora Message Bus was deployed, providing some initial insights. However, this solution lacked repeatability and equitable access, highlighting the need for more robust and scalable data tools in the future.

Key Outcomes and Deliverables

The “Community Ops 2024 Reboot” initiative achieved several significant milestones, while also identifying areas for future development:

Process Improvement

  • Developed and partially implemented a new SOP for virtual Fedora events.
  • Improved documentation for newcomer onboarding and updated CommOps processes in repositories.
  • Successfully executed the Fedora Linux 40 Release Party; experienced challenges with the Fedora Linux 41 Release Party.

Community Social Analysis

  • Defined initial governance needs and documented basic metrics.
  • Deployed a Pandas-based analysis solution for Message Bus data.
  • Created a preliminary data dictionary and established a foundation for future data infrastructure.

Engagement and Recognition

  • Fostered community engagement by creating dedicated spaces and facilitating regular meetings.

Reporting and Communication

  • Provided periodic updates to the Fedora Council (though less frequently than initially planned).
  • Prepared a final initiative report outlining successes, challenges, and recommendations.

Looking Ahead

The “Community Ops 2024 Reboot” initiative has provided valuable insights into how to better support the Fedora community.

The work done on process improvement, while facing some obstacles, has led to more defined release party structures, including issue templates and some established processes. However, it’s clear that reducing reliance on key individuals is crucial for scalability.

In Community Social Analysis, the initiative identified critical data points for measuring user engagement by topic, aligning with the Fedora 2028 Strategy’s goal of doubling contributors. The team also successfully launched community engagement efforts by creating dedicated spaces and facilitating regular meetings. The critical groundwork laid for easier access to Fedora data will empower more community members to explore and understand our project.

The next steps involve building on these achievements, addressing the identified challenges, and continuing to empower the Fedora community. Thank you to all the CommOps members for their contributions to this important initiative!

Infra and RelEng Update – Week 10

Posted by Fedora Community Blog on 2025-03-07 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 3rd Mar – 7th Mar 2025

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 10 appeared first on Fedora Community Blog.

How to install MediaWiki on Fedora, CentOS, and RHEL servers

Posted by Fedora Magazine on 2025-03-07 08:00:00 UTC

Introduction

Ready to run your own Wikipedia-style knowledge repository on Fedora, CentOS, or RHEL?

Hold on tight! We are about to walk you through the steps for installing MediaWiki on the most innovative Linux distro out there.

Whether you are a Linux geek or a Linux noob looking to get started with MediaWiki, this step-by-step guide will take you from start to finish, pronto!

What is MediaWiki?

MediaWiki facilitates collaboration and documentation in many reputable organizations around the world. Some worthy of mention are:

  • Moodle
  • Blender
  • BogleHeads
  • National Gallery of Arts
  • OpenStreetMaps
  • OpenOffice
  • University of Buffalo
  • The Nielsen Company

MediaWiki is also used in place of Microsoft’s SharePoint. It is preferred for its less complicated RBAC (Role-Based Access Control) system and its $0.00 licensing cost; SharePoint licensing costs between $25,000 and $150,000.

Prerequisites

  • A Fedora, CentOS, or RHEL server. This guide uses CentOS Stream 10; the steps are interchangeable across Fedora, CentOS, and RHEL.
  • A user account with sudo privileges on the server.
  • Command line competency.
  • Docker experience is useful but not required.

Step 1: Update the server.

$ sudo dnf upgrade -y

Step 2: Install Podman

$ sudo dnf install podman

This tutorial uses podman > 5.3. Ensure a compatible version is installed with:

$ podman -v
podman version 5.4

Step 3: Pull the MediaWiki container image from Docker Hub.

$ podman pull docker.io/mediawiki:lts
Trying to pull docker.io/library/mediawiki:lts...
Getting image source signatures
Copying blob 46506c43b76b done   |  
Copying blob 7cf63256a31a done   |  
Copying blob f7e553522295 done   |  
Copying blob ccca7c183c0b done   |  
Copying blob 41bfba87aa2a done   |  
Copying blob 904933496485 done   |  
Copying blob 7906c5c5b56e done   |  
Copying blob 5f93253b2de6 done   |  
Copying blob 0230624a769b done   |  
Copying blob 6beeb76481f6 done   |  
Copying blob 012fd53ee67a done   |  
Copying blob a3339e6f62b1 done   |  
Copying blob b1eb0357bfab done   |  
Copying blob 4f4fb700ef54 done   |  
Copying blob 675bda9db3e3 done   |  
Copying blob c0a6d25b98b0 done   |  
Copying blob 417fd4c91734 done   |  
Copying blob 3c7453788306 done   |  
Copying blob ce01c7644913 done   |  
Copying blob ebe12d15cfb9 done   |  
Copying blob 35fe3b70b606 done   |  
Copying config 346df66094 done   |  
Writing manifest to image destination
346df660949efc448741705767a5db05a290e9d870c354ae93edc0e0291f7f03

Step 4: Pull the MariaDB container image from Docker Hub.

$ podman pull docker.io/mariadb:lts
Trying to pull docker.io/library/mariadb:lts...
Getting image source signatures
Copying blob 597f7afe50fe done   |  
Copying blob 5a7813e071bf done   |  
Copying blob 5db80086e4da done   |  
Copying blob 901fe9394c00 done   |  
Copying blob 43eb19e1b102 done   |  
Copying blob bdecd990c29c done   |  
Copying blob e1dede558384 done   |  
Copying blob 5c3a22df929b done   |  
Copying config a914eff5d2 done   |  
Writing manifest to image destination
a914eff5d2eb4c650a4e787e453d52a4ffba977632bd624cc6e63d0c9c4c2d65

Step 5: Run the MariaDB and MediaWiki containers in a pod.

5.1. Create a pod

A pod is a group of containers that share the same network namespace and, optionally, other resources.

Create a pod named wikipod:

$  podman pod create -n wikipod -p 8080:80
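
If you want to confirm the pod and its 8080:80 port mapping before adding containers, podman can list pods (a quick, optional check):

$  podman pod ps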

5.2. Run mariadb in wikipod

$  podman run --detach --name mariadb --env MARIADB_ROOT_PASSWORD=mediawiki --pod wikipod mariadb:lts

5.3. Run mediawiki in wikipod

$  podman run --detach --name mediawiki --pod wikipod mediawiki:lts

5.4 Check that the mariadb and mediawiki containers are running.

$  podman ps
CONTAINER ID  IMAGE                                    COMMAND               CREATED         STATUS         PORTS                           NAMES
936f315a87be  localhost/podman-pause:5.4.0-1739318400                        52 seconds ago  Up 31 seconds  0.0.0.0:8080->80/tcp            752fc149d017-infra
2a4ada8a898d  docker.io/library/mediawiki:lts          apache2-foregroun...  30 seconds ago  Up 31 seconds  0.0.0.0:8080->80/tcp            mediawiki
7d1c7a41ea83  docker.io/library/mariadb:lts            mariadbd              14 seconds ago  Up 15 seconds  0.0.0.0:8080->80/tcp, 3306/tcp  mariadb

Voilà! The MariaDB and MediaWiki containers are up and running.

Step 6: Complete the MediaWiki installation.

6.1 Go to http://<Your-Server-IP>:8080.

Server IP for this installation is 127.0.0.1.
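
If you prefer to confirm the web server is reachable before opening a browser, a quick optional check from the server itself (assuming curl is installed and the 8080:80 mapping created in step 5.1) is:

$  curl -I http://127.0.0.1:8080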

Click “complete the installation” and follow the prompts to configure MariaDB.

6.2 Follow the prompt to configure the database.

6.3. Fill in database credentials.

The credentials used for this demonstration installation are as follows (adjust this to suit your needs):

  • database username: root
  • database password: mediawiki
  • database name: wiki
  • database host: 127.0.0.1
  • wikimedia admin username: admin
  • wikimedia admin password: fedora magazine
  • wiki name: fedora magazine

6.4 Database setup complete.

Once database setup is complete, click Continue to complete the MediaWiki installation.

6.5 Mediawiki setup complete.

At this point we have accomplished the following:

  1. MariaDB has been successfully configured for MediaWiki.
  2. MediaWiki has been successfully installed.

Now the LocalSettings.php file must be copied to the directory where MediaWiki is installed.

Step 7: Copy LocalSettings.php to the MediaWiki installation directory.

7.1 Get the mediawiki container ID.

The mediawiki container ID for this installation is 2a4ada8a898d (see the podman ps output in section 5.4, above).
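
If you would rather not copy the ID by hand, podman can print it directly; this optional shortcut uses podman’s standard --filter and --format options:

$  podman ps --filter name=mediawiki --format "{{.ID}}"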

7.2 Copy LocalSettings.php into the mediawiki container using podman.

$  podman cp ~/Downloads/LocalSettings.php 2a4ada8a898d:/var/www/html
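
To verify the file landed where MediaWiki expects it (an optional sanity check, using the container name from step 5.3), list it inside the container:

$  podman exec mediawiki ls -l /var/www/html/LocalSettings.php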

Step 8: Log in to MediaWiki.

Proposal to move Community Blog to Fedora Discussion

Posted by Fedora Community Blog on 2025-03-06 10:00:00 UTC

Hello, readers of the Community Blog!

Recently, at a Community Blog round table meeting, we had an interesting conversation about the future of the Community Blog, and we would like to hear your feedback on that discussion.

What is this change about? We would like to move from our WordPress instance to discussions.fedoraproject.org as a new category with a new team of curators. Why this change, you ask? Here is a list of the improvements it would bring us:

  • Simpler editor workflow – this could potentially help us get more editors for the Community Blog, and the actual reviews should be faster
  • One place to read blog posts – we already forward all blog posts to Discussions anyway, which is also how comments are enabled for blog posts
  • One less service to maintain – we already maintain and use Discussions as Fedora, and maintaining a WordPress instance on top of that for the Community Blog is extra work that does not add much

Here is the space for your feedback. Do you think this is a good idea? What would you miss on discussions.fedoraproject.org compared to the WordPress instance? Please let us know in the comments.

If you want to be a potential curator for the new Community Blog, let us know in the comments as well.

The post Proposal to move Community Blog to Fedora Discussion appeared first on Fedora Community Blog.

New badge: Fedora+CentOS Classroom at SCALE 22x !

Posted by Fedora Badges on 2025-03-05 15:42:22 UTC
Fedora+CentOS Classroom at SCALE 22x: You attended the Fedora+CentOS Classroom at SCALE 22x.

4 cool new projects to try in Copr for March 2025

Posted by Fedora Magazine on 2025-03-05 08:00:00 UTC

This article series takes a closer look at interesting projects that recently landed in Copr.

Copr is a build-system for anyone in the Fedora community. It hosts thousands of projects with a wide variety of purposes, targeting diverse groups of users. Some of them should never be installed by anyone, some are already transitioning into the official Fedora repositories, and others fall somewhere in between. Copr allows you to install third-party software not found in the standard Fedora repositories, try nightly versions of your dependencies, use patched builds of your favourite tools to support some non-standard use-cases, and experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

Spotify Qt

Spotify-qt is an unofficial lightweight Spotify client developed in Qt, intended as a faster, smaller alternative to the official Spotify application. Actual playback requires another Spotify client running in the background (for example librespot), which can be easily configured within the app. Note that controlling playback requires Spotify Premium.

Key features:

  • Low resource consumption
  • Highly customizable
  • Multiplatform support

For more detailed information, see the FAQ. For instance, it contains a step-by-step guide on configuring your own Spotify application in the Spotify Dashboard.

Showcase of spotify-qt client

Installation instructions

The repo currently provides spotify-qt for Fedora 40, 41, 42, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable kraxarn/spotify-qt
sudo dnf install spotify-qt

Ghostty

Ghostty is a terminal emulator that aims to balance speed and rich functionality with a native, friendly user interface. While many terminal emulators choose between performance and features, ghostty strives to excel at both while providing a native look and feel.

Key features:

  • Supports multiple windows, tabs, and split views out of the box
  • GPU acceleration
  • Platform-native UI (on macOS and Linux)
Ghostty terminal showing fastfetch output

Installation instructions

The repo currently provides ghostty and ghostty-git (for those who want the latest build from the main branch) for Fedora 40, 41, 42, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable pgdev/ghostty
sudo dnf install ghostty  # (or ghostty-git)

Zen Browser

Zen Browser centres its design around vertical tabs. This is a concept shared by browsers like Vivaldi, Brave, and especially Arc Browser. Zen Browser provides features like Split View, Zen Sidebar (a detachable sidebar for quick side-by-side browsing), and Zen Glance (for previewing a site without leaving your current page). You can also organize your tabs with “workspaces,” allowing you to separate personal and work contexts.

Key features:

  • Strong privacy focus – blocks trackers, ads, and other unwanted content
  • Modern interface with focus on vertical tab management
  • Split View and detachable sidebar
  • Workspaces to keep tab groups organized
Zen Browser showcasing their Split View feature

Installation instructions

The repo currently provides zen-browser for Fedora 40, 41, 42, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable sneexy/zen-browser
sudo dnf install zen-browser

LACT

LACT is a powerful tool for advanced control and monitoring of AMD, Nvidia, and Intel GPUs on Linux. It allows you to view detailed information about your GPU, monitor performance and thermal data, configure power limits, customize fan curves, and even overclock GPU and VRAM clocks if supported by your driver. LACT does not rely on X11 extensions, so it should work in any desktop session environment.

Key features:

  • GPU information display and monitoring
  • Power limit configuration, fan curve customization
  • Overclocking

To check whether your hardware is supported and how to configure LACT properly, please take a look at the documentation.

Overclocking in LACT

Installation instructions

The repo currently provides lact for standard installation, lact-headless for a setup without GUI, and lact-libadwaita for GUI built with Libadwaita, all for Fedora 40, 41, 42, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable ilyaz/LACT
sudo dnf install lact  # (or with -headless or -libadwaita)

and then enable the service:

sudo systemctl enable --now lactd

Meet the Fedora Project Mentors for Google Summer of Code 2025

Posted by Fedora Magazine on 2025-03-04 17:29:38 UTC

The Fedora Project is excited to participate in Google Summer of Code (GSoC) 2025. This continues our commitment to fostering open-source contributions and mentoring new contributors. As an organization deeply rooted in collaboration and innovation, Fedora provides an excellent environment for aspiring developers to engage with real-world projects and make a lasting impact on the open-source community.

Our mentors are experienced contributors who are passionate about guiding students through their GSoC journey and turning them into successful open-source contributors. They provide technical expertise, project insights, and invaluable mentorship to ensure a successful learning experience. Here is a nutshell introduction to our wonderful mentors for this round!

Meet the Mentors:

Huzaifa Sidhpurwala

Huzaifa is mentoring the “AI-Powered Log Triage and Security Alert Aggregator for Fedora” project. In his own words:

Huzaifa Sidhpurwala is a Senior Principal Product Security Engineer, currently serving on Red Hat’s Product Security AI team. With over 15 years of experience in open-source security, he has played a pivotal role in safeguarding critical projects and collaborating with various open-source communities. His current focus involves advancing AI security, safety, and trustworthiness—ensuring cutting-edge technologies are developed and deployed responsibly.

He has been contributing to the Fedora project in various capacities for over a decade now, including leading and working with the fedora security team for some time. Beyond his professional responsibilities, Huzaifa actively pursues personal projects that harness AI for real-world applications, underscoring his belief in the transformative potential of emerging technologies. Whether developing practical solutions or sharing his knowledge through mentorship, he strives to foster a culture of innovation and collaboration. His expertise extends across areas such as threat modeling, security testing, and building trustworthy AI systems, making him a valuable resource for aspiring professionals. Driven by curiosity and a passion for continuous learning, Huzaifa remains committed to elevating the standards of security in the rapidly evolving AI landscape.

Frantisek Lachman

Frantisek is returning for another round of Google Summer of Code as a Fedora Project Mentor and will be mentoring “Create a service to get a new project to Fedora more easily”. Below is some context about Packit and himself.

Created in 2020, Packit is widely used in the Fedora Project with GitHub to this day! An interface with GitLab is still needed. In his own words:

I am František. I work as a Product Owner for the Packit team in Red Hat. This means I am the one driving long-term efforts and facilitating discussions between our users, other teams and my fellow teammates. And what is Packit? This project aims to get developers and Linux distributions (mainly Fedora) closer together. We do this by providing GitHub/GitLab CI and also automation for various tasks that need to be done before the released code finds its way to the user. I’ve been in Red Hat for 7 years, starting as a bachelor’s thesis student, then as an intern, part-timer and lately as a full-time employee. I have always been interested in various automation efforts, git and all the things we can automate or make smoother for people. Otherwise, I’ve spent a couple of years at Brno Masaryk University teaching basic Python courses and Software Engineering classes. Outside of the Open Source world, I lead an organisation team of one Czech scout course where I focus on pedagogy and non-formal education. I also drive a tandem bike and like to spend time outside.

About mentoring

Each mentor brings unique skills and insights, covering a range of Fedora-related projects, from system enhancements to cutting-edge technologies. GSoC contributors will have the opportunity to work closely with these mentors. They will be learning best practices in open-source development and making meaningful contributions to Fedora and beyond!

Interested in GSoC?

We encourage prospective students to explore Fedora’s GSoC project ideas and connect with our mentors to discuss their interests. Stay tuned for updates, and join us in making GSoC 2025 a successful and rewarding experience!

For more details on Fedora’s participation in GSoC 2025, visit: https://docs.fedoraproject.org/en-US/mentored-projects/gsoc/2025/ideas/#_idea_list

Packit as Fedora dist-git CI: Phase 1 completed

Posted by Fedora Community Blog on 2025-03-04 10:00:00 UTC

Hello Fedora Community,

We are excited to share an update on the Packit as Fedora dist-git CI change proposal. This initiative aims to transition Fedora dist-git CI to a Packit-based solution, deprecating Fedora CI and Fedora Zuul Tenant. The change affects the triggering and reporting mechanism for tests but does not alter the tests themselves or the test execution service (Testing Farm). The transition will be gradual, allowing maintainers to try the integration out, provide feedback and catch issues early. You can read more about the benefits and why we are doing this in the proposal.

What we have and how to use it

As part of the first phase, we have implemented scratch builds for Fedora dist-git PRs. This feature is currently opt-in, and maintainers can enable it by adding their projects to our configuration here by creating a pull request. This is a short-term solution during development; the configuration mechanism won’t be needed in the final phase since the new solution will be used by default. If you maintain a package in Fedora dist-git and want to be included in the Packit-as-dist-git-CI development, simply add your project to the linked configuration. You can see an example of how it looks for an enabled project in this PR, and a reporting example directly in this screenshot:

Example of a commit flag in src.fedoraproject.org pull request

Providing feedback & asking questions

We welcome feedback and questions! For bugs or feature requests, please use this issue tracker. For ideas or suggestions to discuss, feel free to add a discussion topic here. And for any other questions, join us in the #packit:fedora.im channel on Matrix.

What’s next?

In the next phase, we will work on installability checks. We will announce updates in the same way once it is complete. 

Recap of the plan 

  • Phase 1 (Completed): Introduce scratch builds for Fedora dist-git PRs (opt-in).
  • Phase 2 (Next step): Implement installability checks (opt-in).
  • Phase 3: Implement support for user-defined TMT tests (opt-in). 
  • Final Phase: Transition to the new Packit-based CI as the default mechanism, replacing Fedora CI.

You can also check our tasklist in this issue.

We appreciate your support and look forward to your feedback!

Best, the Packit team

The post Packit as Fedora dist-git CI: Phase 1 completed appeared first on Fedora Community Blog.

New badge: Chemnitzer Linux-Tage 2025 !

Posted by Fedora Badges on 2025-03-02 18:41:13 UTC
Chemnitzer Linux-Tage 2025: Thanks for stopping by the Fedora booth at Chemnitzer Linux-Tage.

Contribute at the Fedora Linux 42 i18n Test Week

Posted by Fedora Magazine on 2025-03-02 08:00:00 UTC

The i18n team is testing changes for Fedora Linux 42 (ibus-libpinyin 1.16, IBus 1.5.32, and many more). As a result, the i18n and QA teams organized a test week from Tuesday, March 04, 2025, to Monday, March 10, 2025. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the i18n test week is your source of information on what and how to test. After you’ve done some testing, you can log your results in the test week web application. If you’re available on or around the days of the event, please do some testing and report your results.

Happy testing, and we hope to see you on one of the test days.