Start with smaller filters going back to 2013 and delete in three-month increments. Your current filter is still very large and times out.
To avoid situations like this in the future, place an information management policy on the hidden parent content type "Workflow History" at the site collection level to move items to the recycle bin after a set number of days or months. This automates the clean-up going forward.
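A rough sketch of that batching idea, assuming NWAdmin.exe is on the path and using a placeholder site URL. The idea is to purge the oldest entries first with a large -days cutoff, then step the cutoff down roughly three months per pass so each run stays small; the -days and -state switches are the ones shown elsewhere in this thread, so check NWAdmin.exe -help on your version:

```powershell
# Hypothetical sketch only: purge the oldest Nintex history entries first,
# then lower the -days cutoff by roughly three months per pass so each
# NWAdmin run is small enough not to time out. The site URL is a placeholder.
$siteUrl = "http://yoursite"
for ($days = 1095; $days -ge 365; $days -= 90)
{
    Write-Host "Purging history entries older than $days days..."
    & 'NWAdmin.exe' -o PurgeHistoryListData -siteUrl $siteUrl -days $days -state all -silent
}
```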
Thanks Bryan,
I have done this and while it removed some entries, there are still over 30,000 remaining. As the screen snip indicates, this entry and many more are still there from mid-2013. They just won't budge.
Cheers,
Mark
Have you tried using PowerShell to purge this list?
Aaron has written a blog on that:
How to purge items from a large history list safely via PowerShell
Note that it can take time to delete all the items.
Hope this helps
Hi Caroline,
Yes, I have run the PowerShell in Aaron's post. Yes, it was incredibly slow - I kicked it off on Friday and it was still going on Monday morning! While this did knock a few of the items off, I'm still intrigued as to why some of the older entries still exist in the list. I have actually raised a support ticket, so hopefully I can get to the bottom of it.
Thanks,
Mark
OK, maybe you should also empty the site collection recycle bin, as I think the PowerShell script sends the deleted items to the recycle bin.
Hope you manage to delete the items.
Have you got an answer back on this?
I too am seeing the same results when using PowerShell.
Cory and all. You are seeing an issue due to the number of items that are present. What Caroline suggested is the way to go. It will take a while and there are a few blogs from SharePoint land on this. I would attempt smaller batches if possible and just plan to have this done over a period of time.
Another option would be to create a scheduled task on the server that executes the PowerShell script after business hours, and keep the max number of records underneath 5k to start. Until you get the items down to a manageable number, it will run slowly.
Hope that helps.
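One way to sketch that scheduled task (the task name, script path, and run time are assumptions; run this on a farm server under an account with farm rights):

```powershell
# Hypothetical sketch: register a nightly task that runs the purge script
# after business hours. The purge script itself should keep batch sizes
# small (e.g. NWAdmin -batchSize well under 5k) per the advice above.
$action  = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\PurgeNintexHistory.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 10pm
Register-ScheduledTask -TaskName 'NintexHistoryPurge' -Action $action -Trigger $trigger
```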
I am running a PowerShell script overnight (actually over several nights).
We have NintexWorkflowHistory lists with well over 100 million rows x 2,000+ lists across the farm.
My scripts look similar to:
See attached (couldn't get the formatting in here)
The question I have is similar to the above... Even after allowing this script to fully finish, I still have workflows that exceed 180 days (or the set timeframe). Also, I am curious about the statement that all NWAdmin does is move the workflows to the recycle bin - is this true?
Hi Cory,
I still have the issue. No amount of script running appears to clear the entries from the NintexWorkflowHistory tables. Nintex support re-affirmed that the action to take is to run the scripts. Below is an extract from the steps suggested:
<snip>
This is a guide I wrote for general workflows maintenance and purging history list data and workflow data:
- 1. Locate one Web Front End server which has the Web Application service running, then run the following PowerShell commands to copy workflow-related configuration from the web.config to the configuration database so it will be available from every server in the farm.
• $webapp = Get-SPWebApplication -identity http://<web app name>
• $webapp.UpdateWorkflowConfigurationSettings()
From <https://support.microsoft.com/en-gb/kb/2674684>
- 2. Synchronize Nintex database with the MS Workflow Foundation database
- NWAdmin.exe -o SyncTerminatedWorkflows -url "http://url" -verbose -showMissingItems -terminateDeletedItems
- 3. Purge data from Workflow History List or create a new History List
To purge data manually, please see points 3a and 3b. To create a new Nintex Workflow History list, go to the SharePoint site, open Site Settings, then Workflow History List, and either create a new one or purge the items from the existing list. There should be no more than 10-15k items for best performance. If you decide to create a new history list, you will then have to open the workflow in the Workflow Designer, select the new history list under Workflow Settings, and publish the workflow - you can leave the old workflow history as it is in this scenario.
3a. Purge Workflow History
- NWAdmin.exe -o PurgeHistoryListData -siteUrl "http://URL" -verbose -batchSize 500 -state completed
Available State: completed/cancelled/errored/all
3b. Purge Workflow Data
- NWAdmin.exe -o PurgeWorkflowData -url http://url -state completed -verbose
Available State: completed/cancelled/errored/all
If this command times out or errors, we have a -timeout switch. You can also narrow the scope with multiple switches like -listName, -lastActivityBefore, etc. The guide: https://community.nintex.com/docs/DOC-1218
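As an illustration of combining those switches (the URL, list name, and date are placeholders, and the exact date format accepted by -lastActivityBefore may depend on the server locale, so verify against NWAdmin.exe -help on your version):

```powershell
# Hypothetical example: purge only completed workflow instances on one
# list whose last activity predates a cutoff date.
& 'NWAdmin.exe' -o PurgeWorkflowData -url http://yoursite `
    -state Completed -listName "Documents" -lastActivityBefore "2014-01-01" -verbose
```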
Please find useful references: https://community.nintex.com/community/build-your-own/blog/2014/10/07/demystifying-workflow-history-part-1
<snip>
I have started implementing new History Lists for each workflow to mitigate the issue, but the majority of my entries still exist in the original list - they won't budge.
Regards,
Mark
Thanks Mark...
Apparently the Nintex folks fail to appreciate the impact these have on a LARGE farm (40+ servers) handling over 10,000 users, all with their own idea of what a workflow is.
This is what I thought....
thanks for the update and the guide...
I'll get started on finding the impact via our test farm.
Cory,
Thankfully we are a small user base with a single server, so I feel your pain!
Cheers,
Mark
Cory/Mark,
I have witnessed some similar issues with a large farm as well. I did make an assumption that you were deleting items from the Nintex database. Is this correct?
Also, I have a few questions that may help out here:
- Are you performing any backup/purge operations during off-peak hours and still not seeing any results?
- How many Nintex databases do you have in your farm?
- Have you looked at/run the Know your workflow program?
- Do you have any maintenance plan to keep the Nintex Workflow history table from growing?
- Do you know if any training has been provided for site collection admins to help manage the workflows running within their site collections?
If you are purging workflows and it's not making a dent in the total number, you have another issue and need to identify the workflows that may be causing the databases to grow when run. For example, I had a workflow in one farm that created 68,000 rows in the database in a single run. It took about 11 minutes to complete and was used heavily at the end of the month. So you can imagine the scare when I would do a purge and the numbers would jump back up seemingly overnight.
To answer your questions:
1. Yes, I have been running (the included sample) as well as some one-off attempts sporadically for several weeks now. To say we haven't made a dent wouldn't be accurate. But, for example, one workflow history list had over 130 million NintexWorkflowHistory rows; I dropped about 37 million out of it - which is impressive - but if I look at the list via the UI, I still see old workflows from 2012. So the history purge doesn't seem to get those and I don't know why.
2. Nintex DBs - off the top of my head about 20
3. I believe another admin has been looking at this "know your workflow" - however, like many other tools, the items required to put that all together time out when executing, if I remember right.
4. No... This is the problem I am trying to fix and then permanently correct.
5. We do provide training, however... We (a team of 5) are the only allowed site collection administrators. The highest privilege we offer users is "Full Control". Despite numerous articles and classes to address Nintex and how to use it properly, you can imagine that with a user base of over 10,000 - everyone has the "I'm the most important" type of thought process - it only takes a few to overload the systems.
Cory,
Sorry to hear that. I know what you mean. I worked in an environment with 115k users and it was a nightmare to try and get a handle on. Speaking of, keep an eye out on Nintex Connect "Live Chat" - Google+ which is happening this Wednesday. See Unanswered Questions for more details.
Honestly, another option would be to identify your larger databases, or the ones that should mirror the site collections in question. Dismount them and create another one. This should give you some more flexibility and offload a huge amount of stress from the system.
Ideally if your farm is that large, I would look at structuring the Nintex databases to align with the processes or sites/list that are creating large amounts of data. This may seem like overkill, but it may help when you need to address a particular user. Their site will be the only one affected and you can encourage/train them to use the workflow or their history list better.
Just an option.
We actually have taken that route... Creating a single NW2XXXDB per ContentDB (or major site organization).
Something we found out is that Nintex does not honor the database-to-database assignments.
It seems Nintex picks (randomly) which workflow DB to use - even if we have it directly mapped to a specific content DB.
That's been our experience with this concept.
Which, in a lot of ways, has made the situation even worse, because now we have NW DBs with any number of workflows running in them - with no rhyme or reason...
Cory,
That sounds like something to look into. I've done it several times and it has worked for me without issues, but that doesn't mean that it always works. I'd be curious to see how we can get you some help on that.
Do you have a partner of record for this and/or used support to assist? This would be slightly outside the scope of what can be provided through the community.
Hi Eric,
Sorry for my delayed response - I have been laid low by a bug for the last week.
I was curious about your assumption that we were deleting items from the Nintex database. Could you please explain this a bit more? I have simply been following the procedures outlined, and not actually touching the database tables directly in any way - my assumption is that the detailed processes should be cleaning up the database as they are actioned through the PowerShell scripts. Is this not the case? If so, can you please elaborate on what needs to be done now and into the future as far as maintenance plans go?
We are only a small shop and have but one Nintex database. While the scale is very different to Cory's, the issue appears very similar.
Regards,
Mark
We are also having similar problems. We have a 3-tier farm and only a few WF history lists are large (around 200,000 items) but we are having trouble cleaning them out.
The only thing that has worked for us so far is to:
- find which workflows have large history lists (this post is awesome)
- for sites with large history lists, create new WF History Lists and change all workflows to use the new history lists
- increase the database timeout value (I tried from 15 to 600) (stsadm -o setproperty -pn database-connection-timeout -pv 600) (don't forget to change it back later)
- purge by workflowname (see powershell code below)
- for any that remain, go into the Workflow Settings for the list the workflows are running from and remove all the old instances (I'm sure there must be a better way to do this but I was frustrated by this point)
Hopefully someone from Nintex can advise of a better way.
function PurgeHistoryListData( $w, $n )
{
    # Recurse into subwebs first, passing the workflow name along
    # (the original recursive call dropped $n and used C-style call syntax)
    foreach ($subweb in $w.Webs)
    {
        PurgeHistoryListData $subweb $n
    }
    Write-Host "Purging Nintex Workflow History List Data on site: $($w.Url)"
    & 'nwadmin.exe' -o PurgeHistoryListData -siteUrl $w.Url -workflowName $n -days 30 -silent
}
$web = Get-SPWeb http://yoursite
$nameOfWF = "Name of Your Workflow"
# Note: PowerShell functions take space-separated arguments;
# PurgeHistoryListData($web, $nameOfWF) would pass a single array as $w
PurgeHistoryListData $web $nameOfWF
Hi Robyn,
Thanks for the heads up on the post - I haven't seen that one!
Looks like this may be the most relevant extract from the post:
---WARNING---
Only perform a dbo.WorkflowProgress clean-up AFTER you have purged data from your Nintex workflow history lists. Not doing so will prevent you from purging items from the history list using the "PurgeHistoryListData" command unless the "-clearall" switch is used.
Perhaps this is the issue for us.
Cheers,
Mark
Here is another "Large List" reporting tool - that includes URLs and other columns...
I struggled with that in the above-linked PoSH.
I am pushing this idea through our support staff...
I am not familiar with PowerShelling an Information Management Policy...
But I hacked together this script...
Anyone have an idea to push a policy like this across the entire farm?
Was there a defined outcome/solution to this thread?
Hi Cassy,
Sadly, I did not reach a definitive solution. I have also been hard at other non-SharePoint/workflow activity, so have not been in a position to work through the issue (or participate in the community!).
Cheers,
Mark
I've noticed that there are a couple of things going on here. You might have (1) history list data for list items that no longer exist, or (2) history list data for workflows where you've purged the workflow data. I've been experimenting with the attached PowerShell script to try to identify list item workflows where the history data can be purged using the NWAdmin PurgeHistoryListData command.
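Not the attached script, but a minimal companion sketch that reports the size of each Nintex history list in a site collection so you can pick purge targets; it assumes the default list title "NintexWorkflowHistory", so adjust if yours were renamed:

```powershell
# Hedged sketch (not the attached script): report the item count of each
# Nintex workflow history list in a site collection. The site URL is a
# placeholder and "NintexWorkflowHistory" is the assumed default title.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$site = Get-SPSite "http://yoursite"
foreach ($web in $site.AllWebs)
{
    $list = $web.Lists.TryGetList("NintexWorkflowHistory")
    if ($list -ne $null)
    {
        Write-Host "$($web.Url): $($list.ItemCount) items"
    }
    $web.Dispose()
}
$site.Dispose()
```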