Troubleshooting Tips for Database Health
Updated 24 Mar 2025
Slate is powerful, but with that power comes the potential to create conditions that lead to errors or reduced speed. This article shows you the tools Slate provides to identify and alleviate these issues.
📖 Related reading: If you aren’t currently experiencing slowness or errors, but want to learn how to prevent them, see Proactive Maintenance for Database Health. See also the full suite of diagnostic tools.
Check your rules
To check your rules for slowdowns or errors:
Go to Database → Rules → Rules Health to see which kinds of rules (that is, which rule bases) are taking longest to complete.
Select Check Rules to see which rules are running slowly (or not at all).
If any of your rules have a Count of ERR, select Rule Log to view the errors.
Make adjustments to your rules following rules best practices.
Wait at least 15 minutes. If the issue in Rules Health isn’t alleviated, repeat with another rule with an ERR status.
Example: Rules in the queue over 100 hours
In Rules Health, we find a number of person and application records have been queued for over 100 hours:
Moving to the Check Rules tool, we find a number of rules taking too long to run, including three rules with a Count of ERR, meaning they failed to complete:
Moving to the Rules Log, we find an error pointing to a specific rule.
We select the rule’s GUID to open its overview page.
To alleviate the current backup in the rules queue, we can either adjust the rule’s filters in accordance with rules best practices, or remove the rule from the queue entirely by setting its status to Inactive.
If, after 15 minutes, this doesn’t resolve the backup we identified in Rules Health, we apply this same practice to the other rules that were returning errors and failing to run.
Check the Job Activity Monitor
The Job Activity Monitor gives you a history of all query and report executions, including scheduled exports.
To use the Job Activity Monitor:
Go to Database → Job Activity Monitor.
Select any rows with a status of failure.
Ensure the scheduled exports follow the recommendations in Scheduling Exports and Report and Query Timeout Errors.
Example: SIS data export failure
The Job Activity Monitor contains a pattern of exports that return a status of failure; in this case, a nightly export to our Student Information System fails to run overnight:
Our next step is to review the scheduled export settings for the failing job.
We also explore the associated query or report to determine whether any adjustments need to be made. Selecting the Source link opens the query or report in question:
We repeat this process for each job with a pattern of failed statuses.
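When a failing job is a scheduled export that delivers to an external destination, it can also help to rule out connectivity to that destination before adjusting the query or report itself. The following is a minimal, hypothetical sketch, assuming the export is delivered over SFTP; the host, username, and key path are placeholders, and the paramiko library is just one way to run this kind of check.

```python
import paramiko

# Hypothetical connectivity check for the SFTP server that receives a scheduled export.
# Host, username, and key path are placeholders; substitute your own destination details.
HOST = "sftp.example.edu"
USERNAME = "slate_export"
KEY_PATH = "/path/to/private_key"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(HOST, username=USERNAME, key_filename=KEY_PATH, timeout=10)
    sftp = client.open_sftp()
    # A successful listing means the destination is reachable and accepting logins.
    print("Connected; destination directory listing:", sftp.listdir("."))
    sftp.close()
except Exception as exc:
    # A failure here points to the destination server, not the Slate query or report.
    print(f"Could not reach {HOST}: {exc}")
finally:
    client.close()
```

If the connection succeeds, the failure is more likely on the Slate side (for example, a query timeout), and the export settings and source query are the next things to review.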
Review data imports
Review your data imports in Database → Sources / Upload Dataset to see if any files have long load runtimes.
To keep data imports from causing slowdowns:
Follow the best practices laid out in Importing Data.
Schedule data imports for low-usage windows of time.
Break down large files into multiple smaller imports, as shown in the sketch below.
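If you generate the import files yourself, the splitting can be scripted before the files are delivered to Slate. The sketch below is a minimal example, not a Slate feature: the file name and chunk size are placeholders, and the script simply repeats the header row in every chunk so each piece can be imported against the same source format.

```python
import csv

# Split a large import file into smaller files before uploading.
# File name and chunk size are placeholders; adjust them for your own exports.
SOURCE_FILE = "large_import.csv"
ROWS_PER_FILE = 10_000  # illustrative chunk size


def write_chunk(header, rows, part):
    """Write one chunk file, repeating the header so each file imports on its own."""
    with open(f"import_part_{part:03}.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(rows)


with open(SOURCE_FILE, newline="", encoding="utf-8") as source:
    reader = csv.reader(source)
    header = next(reader)
    chunk, part = [], 1
    for row in reader:
        chunk.append(row)
        if len(chunk) == ROWS_PER_FILE:
            write_chunk(header, chunk, part)
            chunk, part = [], part + 1
    if chunk:
        write_chunk(header, chunk, part)  # remaining rows
```

Running a script like this from a job scheduled outside business hours also covers the low-usage-window recommendation above.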
Example: Large import during business hours
The Sources / Upload Dataset tool shows us an import containing over a hundred thousand rows, which has been occurring daily for the past five days.
A large file like this that is uploaded regularly can be broken into multiple smaller uploads to reduce database strain and improve consistency.
Selecting the most recent upload gives us more information, including the time the file was uploaded. This upload occurred in the morning, during business hours, which can impact other database activities:
Ideally, this file should be uploaded in another window, outside of standard business hours.
Selecting the source format link shows us the full list of sources for this format, and a preview of the settings this source format is using:
Selecting Edit shows us the source format’s settings.
We find that the large number of rows in the import, combined with the Disable Update Queue setting of Allow records to enter update queue upon import, is a likely cause of the database running slower as it processes these changes.
Further troubleshooting
If checking your rules, the Job Activity Monitor, and your imports didn’t help, try these additional steps:
Review the Error Log
Review any database errors in Database → Error Log. Here you’ll find error messages returned from particular methods and pages within Slate.
Use the Definition of Common Errors to understand how to further troubleshoot these errors.
Review the System Dashboard
Go to Database → System Dashboard to understand overall database performance and usage. This tool can also help you identify low-usage windows when scheduling data imports.
Consider your data and record management
Merge duplicate records according to Consolidate Records best practices.
Remove extraneous data and records with the Retention Policy Editor.
Update to Configurable Joins
Review and update objects in Slate to use Configurable Joins whenever possible.
Older objects built on local or Slate Template Library bases may have worked well in the past, but they are no longer the supported standard.
Reach out to the Slate Community for help
Not sure what an error message means? Run out of next steps to take? Make a post in the Community Forums!
To help others understand your issue, include in your post:
redacted screenshots
error messages and other Slate context
troubleshooting steps you’ve already taken