
Locked Release functional testing


 

Bob J and the JMRI team:

Do documented release test scripts exist for testing pre-production releases? In a past life I did this sort of work and would like to contribute to release testing.

I was thinking along the lines of repeatable testing scenarios/scripts that have defined test parameters, steps, and expected outputs.


 

There is a lot of testing going on every time we make any code update.
However, the total coverage of the codebase is well below 100%, and we
keep trying to expand it. It is mostly 'bottom up' testing: the
individual modules of code have tests, and coverage builds up from there.
Extra help improving the coverage would be great. If you review some of
the items on this page:

You will get the idea, with focus on the 'Unit Testing with JUnit' and
'Continuous Integration' parts, but I hope the others in the T&S will
make sense to you as well.

-Ken Cameron, Member JMRI Dev Team
www.jmri.org
www.fingerlakeslivesteamers.org
www.cnymod.com
www.syracusemodelrr.org
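A minimal sketch of the 'bottom up' idea Ken describes: the smallest unit of code gets its own focused checks before anything larger is tested. The class and state values below are hypothetical, loosely modeled on JMRI's turnout concept; real JMRI tests use JUnit, but plain asserts keep this sketch self-contained.

```java
// Hypothetical sketch of bottom-up unit testing (not actual JMRI code).
// One small module, one set of focused checks on just that module.
public class TurnoutSketch {

    static final int CLOSED = 2;   // illustrative state constants
    static final int THROWN = 4;

    private int state = CLOSED;

    void setState(int newState) {
        if (newState != CLOSED && newState != THROWN) {
            throw new IllegalArgumentException("unknown state " + newState);
        }
        state = newState;
    }

    int getState() { return state; }

    public static void main(String[] args) {
        TurnoutSketch t = new TurnoutSketch();

        // unit-level checks on the single module (run with java -ea)
        assert t.getState() == CLOSED : "default should be CLOSED";
        t.setState(THROWN);
        assert t.getState() == THROWN : "state should update";

        boolean rejected = false;
        try {
            t.setState(99);            // an invalid value must be refused
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        assert rejected : "invalid state should be rejected";

        System.out.println("all unit checks passed");
    }
}
```

Once each module has checks like this, higher-level tests can build on modules that are already known to behave.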


 

Hi Bob,

Aside from the CI testing,
I'd say the main priority in user testing is whether JMRI actually connects to the hardware.

Because JMRI connects to so many different command stations and hardware types, that's one thing which is difficult to reproduce.
Can you actually drive a train / program a decoder?
While simulated connections are great, there's nothing like getting a proper piece of hardware connected.

I've been doing quick pre-release tests on Win7 + a Raspberry Pi for MERG CAN USB + MERG CAN Network connections,
using a current panel + profile, along with a virgin install; however, I'm only testing the bits I'm familiar with, e.g. I don't even open up SoundPro or OperationsPro.

I've seen that some people test a profile ( + panels? ) created from the previous 2 major releases to check the upgrade process.

If you had any recommendations for this, I'm sure there'd be some interested folks,

Steve.


 

Steve's point about 'real hardware' is important. We as developers only
have some of the hardware in our hands. Identifying others with hardware
that the developers don't have, and getting some of those individuals to
do specific testing for us, capture logs, etc., is important when bringing
new systems and hardware on line.

If somebody has some of the less-known hardware, it is very helpful if they
are willing and able to check out the development releases, either to see
if we missed something and broke them, or to try new features for those
lesser-known systems.

-Ken Cameron, Member JMRI Dev Team
www.jmri.org
www.fingerlakeslivesteamers.org
www.cnymod.com
www.syracusemodelrr.org
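As a generic illustration of the 'capturing logs' step Ken mentions, here is a sketch using the JDK's built-in java.util.logging. JMRI's own logging is configured differently (it writes its own log files), so the logger name, file name, and messages below are purely illustrative.

```java
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Generic sketch: capture detailed log messages to a file that a
// tester could attach to a problem report. Names are hypothetical.
public class LogCaptureSketch {
    public static void main(String[] args) throws Exception {
        Logger log = Logger.getLogger("hardware.test"); // illustrative logger name
        FileHandler handler = new FileHandler("hardware-test.log");
        handler.setFormatter(new SimpleFormatter());
        handler.setLevel(Level.FINE);   // capture debug-level detail in the file
        log.addHandler(handler);
        log.setLevel(Level.FINE);

        log.info("opening connection to command station");  // example messages
        log.fine("raw packet: 0xE0 0x04 ...");
        log.info("connection test complete");

        handler.close();  // flush, so the file can be sent to the developers
    }
}
```

The point is simply that a tester with uncommon hardware can turn on detailed logging, reproduce the problem, and send the resulting file along with their report.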


 

That would be great!

The release script (in the theatrical sense, not in the computer-program sense) is here:
It's a bit messy, but we really do follow the steps.

There's a lot of automated testing (e.g. see and )

The testing with hardware comes in toward the end of the process, and it's a bit ad hoc. After the (proposed) files are created and available at e.g. , the release builder sends an email to the developers list and asks for feedback on the files. Historically, this was mostly to make sure that the installers would install OK on different kinds of machines (none of the people who usually do the releases are Windows natives, for example). In that process, some people generously spend time testing on their hardware, but it's not really systematic.

It would be _great_ to get some additional hardware-based testing at that step.

It would also be good, and perhaps more practical, to have the test releases themselves used and reported on more systematically. They're part of a sequence, and it's really helpful to find problems that occur in that sequence as early as possible. Some people do send a note to e.g. jmriusers or by private email after they use one. That's valuable and much appreciated. But it's not systematic, particularly for the less popular systems and less popular connection methods (if we break LocoBuffer support, we hear about it immediately; if we break an EasyDCC connection, it might take longer…). If people who use that less popular equipment would test at least a test release or two in each sequence, and we had a systematic way to get that feedback, it would help us keep problems from cascading.

Thanks for thinking about this!

Bob


On Dec 13, 2018, at 4:19 AM, Bob Morningstar <bobmorning@...> wrote:

Bob J and the JMRI team:

Do documented release test scripts exist for testing pre-production releases? In a past life I did this sort of work and would like to contribute with release testing.

I was thinking along the lines of repeatable testing scenarios/scripts that have defined test parameters, steps, and expected outputs.
--
Bob Jacobsen
rgj1927@...