ADScanJob
|
Initiates the adcli process on the Management Server to scan the directory servers.
|
CollectorJob
|
Initiates the collector process to pre-process raw audit events received from storage devices. The job applies exclude rules and heuristics to generate audit files to be sent to the Indexers. It also generates changelog files that are used for incremental scanning. |
ChangeLogJob
|
The CollectorJob generates changelog files, one per device, containing a list of changed paths, in the changelog folder. There can be multiple files with different timestamps for each device. The ChangeLogJob merges all changelog files for a device. |
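The per-device merge can be sketched as follows. This is a minimal illustration only; the file naming convention (`<device>_<timestamp>.log`) and one-path-per-line layout are assumptions, not the product's actual changelog format.

```python
import os
import tempfile

def merge_changelogs(changelog_dir, device):
    """Merge all timestamped changelog files for one device into a
    de-duplicated, sorted list of changed paths (illustrative only)."""
    merged = set()
    for name in sorted(os.listdir(changelog_dir)):
        # Assumed naming convention: <device>_<timestamp>.log
        if name.startswith(device + "_"):
            with open(os.path.join(changelog_dir, name)) as f:
                merged.update(line.strip() for line in f if line.strip())
    return sorted(merged)
```

Merging into a set naturally collapses a path that changed in several intervals into a single entry for the next incremental scan.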
ScannerJob
|
Initiates the scanner process to scan the shares and site collections added to Data Insight.
Creates the scan database for each scanned share in the data\outbox folder.
|
IScannerJob
|
Initiates the incremental scan process for shares or site collections, scanning only the paths that have changed on those devices since the last scan. |
CreateWorkflowDBJob
|
Runs only on the Management Server. It creates the database containing the data for DLP Incident Management, Entitlement Review, and Ownership Confirmation workflows based on the input provided by users. |
DlpSensitiveFilesJob
|
Retrieves policies and sensitive file information from Data Loss Prevention (DLP). |
FileTransferJob
|
Transfers the files from the data\outbox folder from a node to the inbox folder of the appropriate node. |
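The local half of this outbox-to-inbox hand-off can be sketched as below. It is an assumption-laden illustration: the real job transfers files between nodes over the network, while this sketch only shows the move semantics between two folders.

```python
import os
import shutil

def transfer_outbox(outbox, inbox):
    """Move every file from a node's outbox into the target node's
    inbox (illustrative; the real job routes files across nodes)."""
    os.makedirs(inbox, exist_ok=True)
    moved = []
    for name in sorted(os.listdir(outbox)):
        src = os.path.join(outbox, name)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(inbox, name))
            moved.append(name)
    return moved
```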
FileTransferJob_content
|
Runs every 10 seconds on the Windows File Server.
Routes the content file and the CSQLite file to the assigned Classification Server.
|
FileTransferJob_Evt
|
Sends Data Insight events database from the worker node to the Management Server. |
FileTransferJob_WF
|
Transfers workflow files from Management Server to the Portal service. |
FileTransferJob_classify
|
Runs on all Data Insight nodes once every minute.
It distributes the classification events between Data Insight nodes.
|
IndexWriterJob
|
Runs on the Indexer node; initiates the idxwriter process to update the Indexer database with scan (incremental and full), tags, and audit data.
After this process runs, you can view newly added or deleted folders and recent access events on shares on the Management Console.
|
ActivityIndexJob
|
Runs on the Indexer node; it updates the activity index every time the index for a share or site collection is updated.
The Activity index is used to speed up the computation of ownership of data.
|
IndexCheckJob
|
Verifies the integrity of the index databases on an Indexer node. |
PingHeartBeatJob
|
Sends the heartbeat every minute from the worker node to the Data Insight Management Server. |
PingMonitorJob
|
Runs on the Management Server. It monitors the heartbeat from the worker nodes and sends notifications if it does not receive a heartbeat from a worker node. |
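The PingHeartBeatJob/PingMonitorJob pair is a standard heartbeat protocol: workers report in periodically, and the monitor flags any node whose last report is too old. A minimal sketch, in which the timeout value is an assumption for illustration:

```python
import time

class HeartbeatMonitor:
    """Track the last heartbeat seen per worker node and report nodes
    that have gone silent (illustrative; timeout is an assumption)."""
    def __init__(self, timeout_seconds=120):
        self.timeout = timeout_seconds
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        # Called each time a worker node's heartbeat arrives.
        self.last_seen[node] = now if now is not None else time.time()

    def silent_nodes(self, now=None):
        # Nodes whose last heartbeat is older than the timeout window.
        now = now if now is not None else time.time()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]
```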
SystemMonitorJob
|
Runs on the worker nodes and on the Management Server. Monitors the CPU, memory, and disk space utilization at a scheduled interval. The process sends notifications to the user when the utilization exceeds a certain threshold value. |
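The disk-space part of this check can be sketched with the standard library; the threshold value is an assumption, and the real job also monitors CPU and memory, which this sketch omits.

```python
import shutil

def disk_usage_alerts(paths, threshold_percent=90.0):
    """Return (path, percent_used) pairs for volumes whose utilization
    exceeds the threshold (illustrative disk-only check)."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        percent = usage.used / usage.total * 100.0
        if percent > threshold_percent:
            alerts.append((path, round(percent, 1)))
    return alerts
```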
DiscoverSharesJob
|
Discovers shares or site collections on the devices for which you have selected the Automatically discover and monitor shares on this filer check box when configuring the device in Data Insight |
ScanPauseResumeJob
|
Checks the changes to the pause and resume settings on the Data Insight servers, and accordingly pauses or resumes scans. |
DataRetentionJob
|
Enforces the data retention policies, which include archiving old index segments and deleting old segments, indexes for deleted objects, old system events, and old alerts. |
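The deletion half of a retention policy reduces to removing files older than a cutoff. A minimal sketch, assuming age is judged by file modification time; the real job also archives index segments, which is not shown here.

```python
import os
import time

def purge_old_files(directory, max_age_days):
    """Delete files whose modification time is older than the retention
    window and return their names (illustrative only)."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return sorted(removed)
```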
IndexVoldbJob
|
Runs on the Management Server and executes the command voldb.exe --index, which consumes the device volume utilization information received from the various Collector nodes. |
SendNodeInfoJob
|
Sends node information, such as the operating system and the Data Insight version running on the node, to the Management Server. You can view this information on the Data Insight Server > Overview page of the Management Console. |
EmailAlertsJob
|
Runs on the Management Server and sends email notifications as configured in Data Insight. The email notifications pertain to events happening in the product, for example, a directory scan failure. You can view them on the Settings > System Overview page of the Management Console. |
LocalUsersScanJob
|
Runs on the Collector node that monitors configured file servers and SharePoint servers. In the case of a Windows File Server that uses an agent to monitor access events, it runs on the node on which the agent is installed.
It scans the local users and groups on the storage devices.
|
UpdateCustodiansJob
|
Runs on the Indexer node and updates the custodian information in the Data Insight configuration. |
CompactJob
|
Compresses the attic and err folders in the <datadir>\collector, <datadir>\scanner, and <datadir>\indexer folders. The process uses the Windows compression feature to set the "compression" attribute on the folders.
The job also deletes stale data that is no longer used.
|
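Setting the NTFS compression attribute on a folder tree is typically done with the Windows compact.exe utility. The sketch below only builds the command line rather than executing it; the folder path is illustrative.

```python
def build_compact_command(folder, compress=True):
    """Build a compact.exe invocation that sets (/c) or clears (/u) the
    NTFS compression attribute on a folder tree (command construction
    only; the path is illustrative)."""
    flag = "/c" if compress else "/u"
    # /s recurses into subfolders, /i continues past errors
    return ["compact.exe", flag, "/s", "/i", folder]
```

The resulting list can be handed to a process launcher such as subprocess.run on a Windows host.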
Compact_Job_Report
|
Compresses the folders that store report output. |
StatsJob
|
On the Indexer node, it records index size statistics to lstats.db. The information is used to display the filer statistics on the Data Insight Management Console. |
MergeStatsJob
|
Rolls up the published statistics into hourly, daily, and weekly periods. On the Collector nodes for Windows File Servers, the job consolidates statistics from the filer nodes. |
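Rolling statistics up into fixed periods amounts to bucketing samples by time and aggregating within each bucket. A minimal sketch, assuming (timestamp, value) samples and averaging as the aggregation; the real roll-up intervals and aggregation rules are product-defined.

```python
from collections import defaultdict

def roll_up(samples, bucket_seconds=3600):
    """Roll raw (timestamp, value) samples up into fixed-size buckets,
    averaging within each bucket (hourly by default; illustrative)."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Align each sample to the start of its bucket.
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(vals) / len(vals) for start, vals in buckets.items()}
```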
StatsJob_Index_Size
|
Publishes statistics related to the size of the index. |
StatsJob_Latency
|
On the Collector node, it records the filer latency statistics for NetApp filers. |
SyncScansJob
|
Gets current scan status from all Collector nodes. The scan status is displayed on the Settings > Scanning Dashboard > In-progress Scans tab of the Management Console. |
SPEnableAuditJob
|
Enables auditing for site collections (within the web application) that have been added to Data Insight for monitoring.
By default, the job runs every 10 minutes.
|
SPAuditJob
|
Collects the audit logs from the SQL Server database for a SharePoint web application and generates SharePoint audit databases in Data Insight. |
SPScannerJob
|
Scans the site collections at the scheduled time and fetches data about the document and picture libraries within a site collection and within the sites in the site collection. |
NFSUserMappingJob
|
Maps each user ID (UID) and group ID in the raw audit files received from NFS and VxFS devices to an ID generated for use in Data Insight. |
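The core of such a mapping is assigning each raw ID a stable internal ID on first sight and reusing it thereafter. A minimal in-memory sketch; the real mapping is persisted and its ID scheme is an internal detail, so the sequential numbering here is purely illustrative.

```python
class IdMapper:
    """Assign a stable internal ID to each raw NFS/VxFS UID or GID
    seen in audit files (illustrative, in-memory only)."""
    def __init__(self):
        self.mapping = {}

    def map_id(self, raw_id):
        # First sighting gets the next internal ID; repeats reuse it.
        if raw_id not in self.mapping:
            self.mapping[raw_id] = len(self.mapping) + 1
        return self.mapping[raw_id]
```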
MsuAuditJob
|
Collects statistics information for all indexes on the Indexer node. |
MsuMigrationJob
|
Checks whether a filer migration is in process and carries it out. |
ProcessEventsJob
|
Processes all the Data Insight events received from worker nodes and adds them to the yyyy-mm-dd_events.db file on the Management Server. |
ProcessEventsJob_SE
|
Processes scan error files. |
SpoolEventsJob
|
Spools events on worker nodes to be sent to Management Server. |
WFStatusMergeJob
|
Merges the workflow and action status updates for remediation workflows (DLP Incident Remediation, Entitlement Reviews, and Ownership Confirmation), Enterprise Vault archiving, and custom actions, and updates the master workflow database with the details so that users can monitor the progress of workflows and actions from the Management Console. |
UpdateConfigJob
|
Reconfigures jobs based on the configuration changes made on the Management Server. |
DeviceAuditJob
|
Fetches the audit records from the Hitachi NAS EVS that are configured with Data Insight.
By default, this job runs every 5 seconds.
|
HNasEnableAuditJob
|
Enables the Security Access Control Lists (SACLs) for the shares when a Hitachi NAS filer is added.
By default, this job runs every 10 minutes.
|
WorkflowActionExecutionJob
|
This job reads the request file created on the Management Server when a Records Classification workflow is submitted from the Portal. The request file contains the paths on which an Enterprise Vault action is submitted. When the action on the paths is complete, the job updates the request file with the status of the action.
By default, this job runs every hour.
|
UserRiskJob
|
Runs on each Indexer. The job updates hashes used to compute the user risk score.
By default, the job runs at 2:00 A.M. every day.
|
UpdateWFCentralAuditDBJob
|
Runs only on the Management Server. It is used to update the workflow audit information in <DATA_DIR>/workflow/workflow_audit.db.
By default, this job runs every 1 minute.
|
TagsConsumerJob
|
Parses the CSV file containing tags for paths. Imports the attributes into Data Insight and creates a Tags database for each filesystem object.
By default, this job runs once every day.
|
KeyRotationJob
|
Run this job on demand to change the encryption keys; it is not an automatically scheduled job.
It is recommended that you run this job after all Data Insight servers, including the Windows File Server agent nodes, are upgraded to 5.2.
If you run the KeyRotationJob without upgrading all the servers, restart all services on the servers that have not been upgraded after the KeyRotationJob has executed and the configuration database has been replicated to those servers.
|
RiskDossierJob
|
Runs on each Indexer and computes the number of files accessible and number of sensitive files accessible to each user on each share.
This job runs every day at 11:00 P.M. by default.
|
ClassifyInputJob
|
Runs every 10 seconds on the Management Server.
The job processes the classification requests from the Data Insight console and
from reports, and records them in the bookkeeping database.
|
ClassifyBatchJob
|
Runs every minute on the Indexer.
The job splits the classification batch input databases into smaller databases for
the scanner's consumption; these are later pushed to the Collector.
|
ClassifyIndexJob
|
Runs once every minute on the Indexer node.
Updates the index with classification tags and also updates the status of the
bookkeeping database.
|
ClassifyMergeStatusJob
|
Runs once every minute on the Management Server.
The job processes the files with the classification update status that are received
from each Indexer. These files are automatically created on the Indexer whenever
updates are available. It also updates the global bookkeeping database that is used
to show the high-level classification status on the Console.
|