
I'm currently working on a large BPM project at work which uses the Global 360 BPM tool-set called Process 360. Just to give some background: this product works like a lot of other BPM solutions in that you design multiple "process maps" which define the flow of a particular business process you're trying to model. Each process map consists of multiple task nodes connected together which perform particular functions (calling web services, etc.).

Currently we're experiencing some pretty serious issues during the QA phases of our releases because the tool-set doesn't provide any way to automate testing of the process map routes. So when a large and complex process is developed and handed over to our test team, a large number of issues often crop up. While obviously you'd expect some issues to come out of QA, I can't help feeling that a lot of the bugs could have been spotted during development if we had some sort of automated testing framework we could use to build up a set of unit tests that prove the various routes in the process map(s).
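Just to make concrete the sort of thing I'm imagining: if the routing rule behind a decision node could be exercised as plain code, it could be covered by ordinary unit tests. The sketch below is purely illustrative (Python, pytest style); the claim-routing rule and thresholds are invented, and nothing like this is provided by the Process 360 tool-set as far as I can tell.

```python
# Purely illustrative: a made-up routing rule for a decision node,
# expressed as plain code so it can be unit tested directly.
def route_claim(claim_amount: float, claimant_flagged: bool) -> str:
    """Return the branch a claim instance should take out of the decision node."""
    if claimant_flagged or claim_amount >= 10_000:
        return "Manual Review"
    return "Auto Approve"


def test_high_value_claim_goes_to_manual_review():
    assert route_claim(25_000, claimant_flagged=False) == "Manual Review"


def test_small_clean_claim_is_auto_approved():
    assert route_claim(500, claimant_flagged=False) == "Auto Approve"
```

Run with pytest, tests like these would flag a broken route the moment it's introduced, rather than weeks later in QA.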

At the moment the only real development testing that occurs is more akin to functional testing performed by the developers, documented as a set of manual steps per test case. The problem with this approach is that it's very time consuming for the developers to run manually and, because of this, also relatively prone to error. Also, because we're usually on a pretty tight schedule, the tests are not executed often enough to spot issues early.

As I mentioned earlier, the current tool-set doesn't provide a way to perform this sort of automated testing, which got me thinking: why? Being very new to the whole BPM scene, my assumption was that this was just a feature lacking in the product, but I also wonder whether "unit testing" traditionally just isn't done in the BPM world. Perhaps it just isn't well suited to this sort of work?

I'd be interested to know if anyone else has ever encountered these sorts of issues, and also what - if anything - can be done to improve things.

Hi @FranCruzarianic 

I would agree, the lack of automated testing in BPM software can be a setback. Typically on large K2 projects we can, and often do, find ways to compensate for the lack of automated testing. Here are a few common practices that come to mind:

  • Building Parent/Child Processes and SmartForms in a fashion that accommodates easier unit testing, i.e. being able to unit test the smallest piece by itself and then, when required, test that piece in conjunction with the rest of the solution (see the first sketch after this list).


  • Managing a backlog of bugs in a tool like Azure DevOps. This gives you the ability to assign a Priority and Severity to each bug, so you can manage and track the bugs that are critical to fix before the next release and worry about the low-severity bugs later.


  • Defining a set of "Happy Paths" that the developer should test after each major change.


  • Simply making time in the QA phase for more testing and bug-fixing cycles. If the team commonly needs more time to find and fix bugs, I would imagine an extended QA phase would be a good return on investment. (Of course this is easier said than done, but if bugs being discovered in production is a recurring problem, it should be an easy sell.)


  • Reducing the complexity of the solution, and trying to take a more object-oriented programming approach. (This can also be easier said than done.)


  • Trying an automated UI testing platform like Selenium or Microsoft Coded UI. I've heard of several successful implementations of these tools on projects before (see the second sketch below).
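To make the first bullet a bit more concrete, here is a minimal sketch (Python, purely for illustration) of testing the smallest piece by itself: the bit of integration code a task node would invoke is exercised directly, with its web-service dependency mocked out. The function name, endpoint URL and payload are hypothetical.

```python
# Hypothetical integration code a task node might call, plus a unit test
# that exercises it in isolation by mocking the web-service dependency.
from unittest import mock

import requests


def submit_leave_request(employee_id: str, days: int) -> bool:
    """Post a leave request to the (hypothetical) HR service and report success."""
    response = requests.post(
        "https://hr.example.com/api/leave",  # placeholder endpoint
        json={"employee": employee_id, "days": days},
        timeout=10,
    )
    return response.status_code == 201


def test_submit_leave_request_reports_success():
    fake_response = mock.Mock(status_code=201)
    with mock.patch("requests.post", return_value=fake_response) as post:
        assert submit_leave_request("E123", 5) is True
        post.assert_called_once()
```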
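And for the last bullet, a short Selenium sketch of an automated "happy path" run through a form, the kind of check a developer could fire off after each major change. The URL, element IDs and expected text are placeholders; swap in the fields from your own SmartForm.

```python
# A minimal "happy path" UI test using Selenium's Python bindings.
# All identifiers below (URL, element IDs, expected text) are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://workflow.example.com/Runtime/Form/LeaveRequest")
    driver.find_element(By.ID, "EmployeeName").send_keys("Test User")
    driver.find_element(By.ID, "Days").send_keys("5")
    driver.find_element(By.ID, "SubmitButton").click()
    # The happy path is complete when the confirmation message appears.
    confirmation = driver.find_element(By.ID, "ConfirmationMessage").text
    assert "submitted" in confirmation.lower(), f"Unexpected result: {confirmation}"
finally:
    driver.quit()
```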

I hope a few of these help. 

