The reason I ask is that the lab I'm applying to for a PhD asked me this question. I'm not familiar with Linux build pipelines, so can anyone tell me what I should consider if I want to integrate compiler testing into a Linux build pipeline? Thanks.
I'm not a dev, but the dev guys here use Azure DevOps.
We have a process: we check code in to GitHub (a private repo).
DevOps grabs our code, then runs some "make" scripts to build the binaries.
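That build stage is conceptually just a script the agent runs; here's a minimal sketch. The tiny generated Makefile, the target name, and the directory layout are all made up purely for illustration:

```shell
#!/bin/sh
# Sketch of the build stage the agent runs after checkout. The tiny
# Makefile below is a stand-in for a real repo's build scripts.
set -eu                          # any failing command aborts the stage

WORK=$(mktemp -d)                # stand-in for the agent's workspace
cd "$WORK"

# Fake "checked-out repo": one Makefile with one binary target.
printf 'hello.bin:\n\tprintf "build ok" > hello.bin\n' > Makefile

make hello.bin                   # the "make scripts" step
mkdir -p artifacts
cp hello.bin artifacts/          # later stages pick binaries up from here
echo "artifact: artifacts/hello.bin"
```

The important bit is `set -eu`: the stage fails loudly on the first broken command instead of shipping a half-built binary downstream.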
The build server is actually a VM in the Azure cloud that has an agent with
permissions to download code. After the binary is built, it is installed (automatically)
to a test VM. The application then has a number of test scripts running. The hardest part of this
is knowing the requirements for the application. Did it get installed correctly?
In the right place? With the right permissions? Does the service start automatically?
Shut down automatically? Does upgrading to a newer version overwrite existing config files?
Does the app do the things it's supposed to do? Did it send the messages? In the right format?
To the right place? If not, did it report any errors? Where? Did the credential work?
When I delete the package (RPMs, in our case), does it delete everything it should?
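Each of those questions can become a small scripted check. A hedged sketch, where the config path, the expected mode, and the service name are invented for illustration (a fake "installed" file stands in for the real application):

```shell
#!/bin/sh
# Sketch of post-install checks run on the test VM. The config path,
# expected mode, and service name are invented for illustration.
set -eu

check() {                           # usage: check <description> <command...>
    desc=$1; shift
    if "$@"; then echo "PASS: $desc"; else echo "FAIL: $desc"; return 1; fi
}

# Demo target: a fake "installed" config file so the sketch is runnable.
APP_CONF=$(mktemp)
chmod 644 "$APP_CONF"

check "config file installed"   test -f "$APP_CONF"
check "config file is mode 644" test "$(stat -c %a "$APP_CONF")" = 644
# On a real test VM you would also ask systemd, e.g.:
#   check "service enabled at boot" systemctl is-enabled myapp.service
```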
There could be dozens of test scripts, depending on what your app does. These are mostly localized unit tests. This generates a report that tells which things passed and which things (if any) failed.
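The runner behind that report can be as simple as a loop over the scripts. A sketch, assuming one-script-per-test in a tests/ directory (the two throwaway demo scripts exist only so it runs end to end):

```shell
#!/bin/sh
# Sketch of a test-runner stage: execute every script in a tests/
# directory and write a pass/fail report. The layout is an assumption;
# the two demo scripts exist only so the sketch runs end to end.
set -u

TESTS=$(mktemp -d)
REPORT=$(mktemp)

printf '#!/bin/sh\nexit 0\n' > "$TESTS/01_install.sh"   # demo: passes
printf '#!/bin/sh\nexit 1\n' > "$TESTS/02_messages.sh"  # demo: fails
chmod +x "$TESTS"/*.sh

fails=0
for t in "$TESTS"/*.sh; do
    if "$t"; then
        echo "PASS $(basename "$t")" >> "$REPORT"
    else
        echo "FAIL $(basename "$t")" >> "$REPORT"
        fails=$((fails + 1))
    fi
done

cat "$REPORT"
echo "total failures: $fails"       # the pipeline gates on this being 0
```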
Assuming everything passed, this gets sent to an rpmbuilder, with pre-made spec files
for RPMs. The RPM artifacts are then uploaded to a yum/dnf repository where they can
be downloaded onto our Lab UAT testing systems for long-term testing and end-to-end
integration testing. Does it break anything else when it installs? Will it run for two weeks without
breaking anything? What dependencies does it require? (which java, which python, etc..)
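The packaging stage is mostly the pre-made spec file plus two commands. A sketch where the package name, version, dependencies, and repo path are all invented, and where `rpmbuild`/`createrepo_c` are only echoed so it runs without the RPM toolchain installed:

```shell
#!/bin/sh
# Sketch of the packaging stage. Everything named here (myapp, paths,
# dependencies) is illustrative; rpmbuild and createrepo_c are echoed
# rather than executed so the sketch doesn't need the RPM toolchain.
set -eu

SPEC=$(mktemp)
cat > "$SPEC" <<'EOF'
Name:     myapp
Version:  1.0
Release:  1%{?dist}
Summary:  Example application (illustrative spec)
License:  MIT
# Runtime dependencies dnf will resolve on install:
Requires: java-17-openjdk
Requires: python3

%description
Minimal illustrative spec; a real one also lists %files and
%pre/%post/%preun scriptlets for install/upgrade/remove behaviour.
EOF

echo "rpmbuild -bb $SPEC"          # build the binary RPM
echo "createrepo_c /srv/repo"      # (re)generate yum/dnf repo metadata
```

Declaring dependencies in the spec's `Requires:` lines is also how the "which java, which python" question gets answered mechanically: dnf refuses to install the RPM if they can't be satisfied.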
We do use Jenkins and Ansible for parts of this, but most of the unit testing happens
in the DevOps cloud VMs.