Test-driven kernel releases

Upstream Linux kernel testing has grown on many fronts during the past few years: kselftest is now more stable, KUnit is gaining coverage, and many out-of-tree test suites keep growing. Many automated systems run these tests continuously, and regzbot has become a central place for tracking regressions, with weekly reports for mainline.

Despite these monumental achievements, mainline and stable releases still happen entirely at their maintainers’ discretion, in the absence of any known blocking regression. Linux kernel development has worked for many years while relying on subsystem maintainers’ testing tools and best-effort test reports. But in the same way that open source has brought contributors together around a single code base, testing can bring kernel users together. Rather than having to rely heavily on downstream testing, we could bring some of the real-world quality control upstream too.

The aim of this talk is to raise a number of critical questions: What would it take to gate releases on a set of passing test results, even basic ones to start with? Can the upstream kernel community ever make such a culture shift? Could release tags include certified test results as a meaningful quality measurement for their users?

Guillaume Tucker, Collabora

Kernel Recipes

June 09, 2024
Transcript

  1. Open Source Philosophy: single mainline code base; many contributors; many use-cases; application changes are sent upstream; reduced duplication of efforts.
  2. Open Testing Philosophy: single mainline code base including tests; many contributors who run tests; test results sent upstream; test results summary in each release; reduced duplication of testing efforts.
  3. Hidden Mass of Testing: duplicated testing efforts; no solution for tracking results upstream; testing stays hidden as if it were downstream.
  4. Focusing on the results: manual runs; maintainer scripts; automated systems. Results are the least common denominator.
  5. Benefits of results in releases: valuable for users in general; a canonical way to keep track of code quality; essentially, avoiding the “works for me” syndrome.
  6. Challenges: a shift in workflow, as results are needed before the release (similar to how -rc works for stable and mainline); expecting positive results rather than solely looking for regressions; an additional step for maintainers; keeping it simple and not disruptive; optional, with each maintainer deciding which results to include.
  7. Where to start? Results reproducible on any hardware; tests included in the kernel source tree; plain builds with reference toolchain binaries and Docker images; builds with sparse enabled (make C=1); coccicheck; KUnit; device tree validation.
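The hardware-independent checks on this slide could be sketched as a small script. The command list mirrors the slide; the run() helper and DRY_RUN toggle are illustrative additions, since really running these requires a configured kernel source tree and toolchain:

```shell
# Dry-run sketch of the reproducible pre-release checks from the slide.
# With DRY_RUN=1 (the default) it only prints what would be executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@" || exit 1
    fi
}

run make defconfig                      # plain build with a reference toolchain
run make C=1                            # rebuild with sparse static checks enabled
run make coccicheck                     # Coccinelle semantic patch checks
run ./tools/testing/kunit/kunit.py run  # KUnit unit tests on the host
run make dt_binding_check               # device tree binding validation
```

Set DRY_RUN=0 inside an actual kernel tree to execute the checks, stopping at the first failure.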
  8. Where to start? (continued) The same list as the previous slide, with a pointer to the documentation: https://docs.kernel.org/
  9. RFC 1: Test results in-tree. Similar to linux-next merge logs; updated for each release (stable, mainline, -next); rely on Git history for older results. Proposed layout:

     Results
     ├── kselftest
     │   ├── futex
     │   └── lkdtm
     ├── KUnit
     │   └── results.json
     └── summary
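As a sketch of how the top-level summary in such a layout could be produced, here is a minimal aggregation script; the per-suite file format (one "test: pass|fail" line per case) and the sample results are assumptions for illustration, not part of the proposal:

```shell
# Build a mock Results tree with the kselftest layout from the slide.
mkdir -p Results/kselftest
cat > Results/kselftest/futex <<'EOF'
futex_wait: pass
futex_requeue: pass
EOF
cat > Results/kselftest/lkdtm <<'EOF'
BUG: pass
WARNING: fail
EOF

# Aggregate pass/fail counts per suite into the plain-text summary.
for f in Results/kselftest/*; do
    pass=$(grep -c ': pass$' "$f" || true)
    fail=$(grep -c ': fail$' "$f" || true)
    echo "kselftest/$(basename "$f"): $pass passed, $fail failed"
done > Results/summary
cat Results/summary
```

A plain-text summary like this keeps the in-tree data small and diff-friendly, while raw logs stay on external systems.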
  10. RFC 3: Git meta-data. Tied to Git history; separate from the commit merge workflow; similar to Git notes: git results show REVISION
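The existing git notes mechanism with a dedicated ref is a close approximation of this idea; the "git results show REVISION" command on the slide is a hypothetical interface, and the repository, tag name and result summary below are made up for illustration:

```shell
# Create a throwaway repository with one commit standing in for a release.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "v6.10-rc1"

# Attach a results summary to the revision without touching the commit
# itself, on a notes ref kept separate from the merge workflow.
git -c user.name=demo -c user.email=demo@example.com \
    notes --ref=results add -m "kselftest: 412 passed, 3 skipped" HEAD

# Retrieve the results for a revision, akin to "git results show HEAD".
git notes --ref=results show HEAD
```

Because notes live on their own ref, result updates can be fetched, pushed and rewritten independently of the tagged history.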
  11. Some thoughts: subsystem-specific results in a separate location? Integration results for mainline / stable / linux-next; subsystem results could be pulled in alongside code; follow the regular email workflow for adding results; keep in-tree result summaries in plain text; extra data can be hosted on separate systems.
  12. RFC: <your idea here>. How does the concept sound? Has this been tried or discussed before? Does it seem worth the effort? Time for an RFC on LKML to go through some details?