Best practices for workflows with long durations?

  • 20 January 2009
  • 6 replies
  • 4 views

Badge +5

I have a workflow for credit requests from customers. The last stage of the workflow is a client event where the user has to confirm that the customer has made a claim (requests are raised when there's an issue, investigated, and only paid if the customer makes a claim). At this stage the workflow is 'pending'. Customers can make a claim for anything that happened up to two years ago.


My question is: since there will be a lot of requests in the 'pending' stage (assume roughly 2000 process instances completing and 500 left in the pending state per year), what is the best way to implement this sort of workflow?


I currently have one workflow process for the whole thing. This means there will be a lot of pending requests in the worklist, which will probably slow K2 to a crawl.


Alternatively, I could have two workflows: the first ends when the request goes into the pending state, and the second starts when the customer makes a claim.


The workflow is integrated with a WinForms app. The issue I have with the second option is that it would break the abstraction and mean the application needs more knowledge of the workflow process in order to deal with the two workflows. Currently I have a toolbar in the app that queries K2 for the user's current actions and builds a menu of their approval options, so the app doesn't need any knowledge of what the workflow is doing.
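For context, the workflow-agnostic toolbar described above could be sketched roughly as follows, assuming the standard SourceCode.Workflow.Client API; the `AddMenuItem` helper is a hypothetical stand-in for the app's own UI code.

```csharp
// Minimal sketch, assuming SourceCode.Workflow.Client. The worklist items
// carry the actions configured on the current client event, so the app
// never needs to know the process layout.
using SourceCode.Workflow.Client;

public static class ApprovalMenu
{
    public static void Build(Connection k2)
    {
        Worklist worklist = k2.OpenWorklist();
        foreach (WorklistItem item in worklist)
        {
            foreach (SourceCode.Workflow.Client.Action action in item.Actions)
            {
                // Hypothetical UI helper: add a toolbar entry that will
                // later call item.Actions[actionName].Execute().
                AddMenuItem(item.SerialNumber, action.Name);
            }
        }
    }

    static void AddMenuItem(string serialNumber, string actionName)
    {
        /* app-specific toolbar code */
    }
}
```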


I'm also wondering exactly what effect all the pending requests would have on K2 performance. I think it will be fine on the SQL side of things, so it's more the time it takes to retrieve the user's worklist (or the full worklist) that I'm concerned about. I'd use WorklistCriteria from the API to let the user retrieve only pending or non-pending items, so is that likely to get around any performance problems?
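The WorklistCriteria filtering I have in mind would look something like this (a sketch, assuming the long-lived client event sits on an activity named "Await Claim", which is a placeholder, as is the server name):

```csharp
// Sketch: exclude the long-running 'pending' activity from the default
// worklist view; invert the comparison to show only pending items.
// Assumes the standard SourceCode.Workflow.Client API.
using SourceCode.Workflow.Client;

Connection k2 = new Connection();
k2.Open("k2server"); // placeholder server name
try
{
    WorklistCriteria criteria = new WorklistCriteria();
    criteria.AddFilterField(WCField.ActivityName, WCCompare.NotEqual, "Await Claim");
    Worklist worklist = k2.OpenWorklist(criteria);
    // ... bind worklist to the UI ...
}
finally
{
    k2.Close();
}
```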


I can test all this by writing some code to kick off a few thousand process instances, but thought it more sensible to gather some advice on best practices first. :)


6 replies

Badge +9

So what is the total expected number of pending instances over time?  Would that be 500 pending per year for 2 years?


That would translate to 1000 pending requests in the worklist table, which doesn't sound like a lot.  I have seen customers with worklist tables in the tens of thousands, and I believe there are customers in the US with even more work items (on more serious SQL hardware).

Badge +5

Thanks for the reply johnny.


Yep that's per year. So if we assume a worst case of 3000 process instances per year that are never completed, that wouldn't cause a problem for blackpearl? Note that there will be other applications using blackpearl, so the overall load will be higher (although they won't have anywhere near as many process instances).


At the moment even querying small worklists takes too long, but I am sure this is only due to our K2 development environment having SQL Server on a virtual machine.


If I go down the route of having the single workflow that can have requests left pending in the workflow, is there any way in blackpearl that I can have them automatically ended after two years (max window the customer can claim in)?

Badge +9

You can auto-end the process or activity after 2 years by either expiring the activity or using a GotoActivity to jump to a dummy 'Completed' activity.
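The expiry route is configured on the activity in the designer; the GotoActivity route can also be driven from an administrative job via the management API. A rough sketch, assuming SourceCode.Workflow.Management (the exact connection setup varies by K2 version, and the instance ID and activity name are placeholders):

```csharp
// Sketch: jump an instance that has been pending for over two years to a
// dummy 'Completed' activity so the process ends cleanly. Assumes the
// SourceCode.Workflow.Management API; connection string is a placeholder.
using SourceCode.Workflow.Management;

WorkflowManagementServer mgmt = new WorkflowManagementServer();
mgmt.CreateConnection();
mgmt.Connection.Open("Integrated=True;Host=k2server;Port=5555"); // placeholder
try
{
    int procInstId = 1234; // found via your own reporting query
    mgmt.GotoActivity(procInstId, "Completed");
}
finally
{
    mgmt.Connection.Close();
}
```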


For the query performance, I guess you can only test this on actual production hardware with the databases properly configured on a SAN.  This will allow you to pump in test processes and probe worklist access to find the point at which performance degrades severely.


But frankly, if you can adopt the other approach (i.e. separate request and claims processes), it would definitely help your application scale better.  If you leverage SmartObjects to store the data and statuses, that might alleviate some of the complexity of keeping the state of the request/claim.  This will also help the application behave better when coexisting with other new applications.

Badge +5
johnny:

I have seen customers who have worklist tables in the tens of thousands. 


johnny, do you know what sort of spec hardware these customers have? We are going to be purchasing some new hardware for our blackpearl servers (blackpearl on one, SQL on the other), and are currently going with the specs in the blackpearl documentation.

Badge +9
If I recall correctly, it was a cluster of 2 K2 servers, each with 2 CPUs and 2GB RAM.  The SQL cluster was a quad-CPU machine with 4GB RAM and a dedicated SAN.  All K2 database files and transaction log files were located on separate LUNs on the SAN.  Note that this was about 3 years ago; it's common to see 2 quad-core servers with 8GB RAM nowadays.  Also note that different applications behave differently, and this can affect your hardware sizing.  Hope this helps.  Cheers.
Badge +3

I don't know anything about the performance of long-running workflows, but one thing to consider is updated versions of your workflow.  If you change your workflow, the old process instances are not automatically upgraded, and if you make too many changes it can become confusing which version each instance is following.  K2 provides an API, and there are applications on the market, to migrate live process instances from one version to another, but there are some limitations and considerations.

Reply