Execution Plan

Great question! An Execution Plan in NICE Actimize is a critical component that defines the sequence, timing, and dependencies of processing tasks within the platform. It is essentially the "blueprint" that determines how data flows through the various components and when specific operations should occur, acting as the "conductor of the orchestra" that coordinates everything the platform does to detect financial crime.

Key Points About Execution Plans:

What They Do:

  • Orchestrate workflows from data ingestion to final reporting
  • Manage dependencies between different processing tasks
  • Handle errors and ensure system reliability
  • Optimize performance through parallel processing and resource management

Where You'll Encounter Them:

  • Batch Processing: Daily/nightly fraud detection runs
  • Real-Time Flows: Live transaction processing pipelines
  • Investigation Workflows: Case management and alert routing
  • Reporting: Automated report generation and distribution

Real-World Impact:

In the detailed example later in this guide, a bank processes millions of transactions every night. The execution plan ensures:

  • All data is collected from multiple sources
  • Processing happens in the right order
  • If something fails, the system knows how to recover
  • Reports are ready for analysts when they arrive in the morning

Why They're Critical:

Without well-designed execution plans, you'd have:

  • Data processing chaos
  • Failed fraud detection
  • Regulatory compliance issues
  • Poor system performance

Think of execution plans as the "project management system" for NICE Actimize - they ensure everything happens when it should, in the right order, with proper error handling.


NICE Actimize Execution Plan: Complete Guide

What is an Execution Plan?

An Execution Plan in NICE Actimize is a structured workflow definition that specifies:

  • What tasks need to be executed
  • When they should run (sequence and timing)
  • How they interact with each other (dependencies)
  • Where the processing occurs (which servers/components)
  • What conditions trigger execution or failure handling

Think of it as a "recipe" that tells the system exactly how to process data from start to finish.

Types of Execution Plans

1. Batch Execution Plans

Used for scheduled, high-volume data processing operations.

Common Examples:

  • Daily transaction file processing
  • End-of-day risk calculations
  • Monthly customer profiling updates
  • Regulatory report generation

2. Real-Time Execution Plans

Used for immediate transaction processing and fraud detection.

Common Examples:

  • Live transaction scoring
  • Immediate alert generation
  • Real-time decision making
  • Instant authentication verification

3. Event-Driven Execution Plans

Triggered by specific events or conditions.

Common Examples:

  • Investigation workflow triggers
  • Exception handling procedures
  • System maintenance routines
  • Data quality checks

Execution Plan Components

1. Tasks/Steps

Individual processing units within the plan.

Examples:

  • Data extraction from source systems
  • Data transformation and validation
  • Risk scenario execution
  • Alert generation
  • Report creation

2. Dependencies

Define the order and relationships between tasks.

Types:

  • Sequential: Task B starts after Task A completes
  • Parallel: Multiple tasks run simultaneously
  • Conditional: Task execution based on conditions
  • Branching: Different paths based on outcomes
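
These dependency types can be sketched in code. The following is a minimal, hypothetical Python sketch (not an Actimize API) that models sequential, parallel, and conditional relationships as a small task graph and resolves a valid execution order:

from graphlib import TopologicalSorter   # standard library, Python 3.9+

# Hypothetical task graph: each task maps to the set of tasks it depends on.
# extract_a and extract_b are independent (parallel); transform waits for both
# (sequential); report only runs if transform succeeded (conditional).
dependencies = {
    "extract_a": set(),
    "extract_b": set(),
    "transform": {"extract_a", "extract_b"},
    "report": {"transform"},
}

def run_task(name):
    """Placeholder for the real task logic; returns True on success."""
    print(f"running {name}")
    return True

completed = set()
for task in TopologicalSorter(dependencies).static_order():
    if dependencies[task] - completed:           # a predecessor failed or was skipped
        print(f"skipping {task} (unmet dependency)")
        continue
    if run_task(task):
        completed.add(task)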

3. Triggers

Events that initiate the execution plan.

Common Triggers:

  • Time-based: Scheduled execution (daily, hourly, etc.)
  • File-based: New file arrival in designated folder
  • Event-based: System events or external signals
  • Manual: User-initiated execution
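
To illustrate the file-based and time-based trigger styles, here is a small Python sketch (the landing folder, file pattern, and schedule are assumptions, not an Actimize interface) that fires a plan when a new file arrives or when the scheduled window is reached:

import time
from datetime import datetime
from pathlib import Path

LANDING_DIR = Path("/data/incoming")     # hypothetical landing folder for new files
SCHEDULED_HOUR = 2                       # time-based trigger: the 02:00 window

def should_trigger(seen):
    """Return (fire, reason) for one polling cycle."""
    new_files = set(LANDING_DIR.glob("*.csv")) - seen
    if new_files:
        return True, f"file-based trigger: {len(new_files)} new file(s)"
    if datetime.now().hour == SCHEDULED_HOUR:
        return True, "time-based trigger: scheduled window reached"
    return False, ""

seen_files = set()
for _ in range(60):                      # poll for an hour, once a minute
    fire, reason = should_trigger(seen_files)
    if fire:
        print(f"starting execution plan ({reason})")
        seen_files |= set(LANDING_DIR.glob("*.csv"))
    time.sleep(60)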

4. Error Handling

Defines what happens when tasks fail.

Options:

  • Retry logic: Attempt failed task again
  • Skip and continue: Move to next task
  • Abort plan: Stop entire execution
  • Alternative path: Switch to backup procedure
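
A hedged sketch of how retry, skip-and-continue, and abort semantics might be expressed around a task call (plain Python illustration, not Actimize configuration):

import time

def run_with_policy(task, name, retries=3, interval_sec=300, on_failure="abort"):
    """Run a task callable under a simple error-handling policy.

    on_failure: "abort" stops the whole plan, "skip" logs and moves on.
    """
    for attempt in range(1, retries + 1):
        try:
            return task()                              # success: hand back the result
        except Exception as exc:
            print(f"{name}: attempt {attempt} failed: {exc}")
            if attempt < retries:
                time.sleep(interval_sec)               # retry logic with a wait
    if on_failure == "skip":
        print(f"{name}: skipping and continuing")      # skip and continue
        return None
    raise RuntimeError(f"{name}: retries exhausted, aborting plan")   # abort plan

# Example: a flaky extract that is retried, then skipped rather than aborting.
def flaky_extract():
    raise IOError("source system unavailable")

run_with_policy(flaky_extract, "extract_transactions",
                retries=2, interval_sec=1, on_failure="skip")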

Real-Life Example: Daily Fraud Detection Execution Plan

Scenario: Regional Bank XYZ

Requirement: Process previous day's transactions for fraud detection and generate management reports.

Execution Plan Structure:

Plan Name: "Daily_Fraud_Processing_Plan"
Execution Time: 2:00 AM daily
Expected Duration: 3 hours
Server: ACTIMIZE-BATCH-01

Phase 1: Data Collection (30 minutes)

Task 1.1: Extract Transaction Data

  • Source: Core banking system transaction tables
  • Target: Actimize staging area
  • Dependency: None (starting task)
  • Parameters:
    • Date range: Previous business day
    • Transaction types: All payment channels
  • Success Criteria: All expected files received
  • Error Handling: Retry 3 times, then alert operations team

Task 1.2: Extract Customer Data

  • Source: CRM and account management systems
  • Target: Actimize customer staging
  • Dependency: Can run parallel with Task 1.1
  • Parameters: Customer profile updates and new accounts
  • Success Criteria: Customer count matches expected range

Task 1.3: Extract Reference Data

  • Source: External fraud databases, merchant databases
  • Target: Actimize reference tables
  • Dependency: Can run parallel with Tasks 1.1 and 1.2
  • Parameters: Updated merchant risk scores, fraud indicators

Phase 2: Data Processing (90 minutes)

Task 2.1: Data Validation and Cleansing

  • Input: Raw data from Phase 1
  • Process: Data quality checks, format validation, duplicate detection
  • Dependency: All Phase 1 tasks must complete successfully
  • Success Criteria: <1% data quality issues
  • Error Handling: Flag problematic records, continue with clean data

Task 2.2: Data Transformation to UDM

  • Input: Validated data from Task 2.1
  • Process: Map source fields to Actimize UDM format
  • Dependency: Task 2.1 completion
  • Output: UDM-compliant transaction and customer records
  • Success Criteria: 100% successful transformation

Task 2.3: Historical Data Integration

  • Input: Current day data + historical customer profiles
  • Process: Merge with existing customer behavior baselines
  • Dependency: Task 2.2 completion
  • Output: Enriched transaction records with historical context

Phase 3: Risk Analysis (60 minutes)

Task 3.1: Behavioral Analysis

  • Input: Enriched transaction data
  • Process: Calculate behavioral deviations and anomalies
  • Dependency: Task 2.3 completion
  • Components:
    • Amount deviation calculations
    • Time pattern analysis
    • Geographic anomaly detection
    • Frequency analysis
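
The amount-deviation idea can be illustrated with a small calculation. This is a generic z-score sketch against a customer's historical baseline, not the actual Actimize analytics:

from statistics import mean, stdev

def amount_deviation(amount, history):
    """Z-score of a transaction amount against the customer's recent history."""
    if len(history) < 2:
        return 0.0                        # not enough history to judge deviation
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (amount - mu) / sigma

# Example: a customer who usually spends ~50 suddenly makes a 900 transaction.
history = [42.0, 55.0, 61.0, 48.0, 50.0]
print(round(amount_deviation(900.0, history), 1))   # large positive deviation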

Task 3.2: Scenario Execution

  • Input: Transactions with behavioral scores
  • Process: Run all active fraud detection scenarios
  • Dependency: Task 3.1 completion
  • Scenarios Include:
    • Velocity monitoring
    • Card testing detection
    • Account takeover patterns
    • Money laundering indicators

Task 3.3: Risk Score Calculation

  • Input: Scenario results and behavioral analysis
  • Process: Calculate final composite risk scores
  • Dependency: Tasks 3.1 and 3.2 completion
  • Output: Risk-scored transaction database

Phase 4: Alert Generation (20 minutes)

Task 4.1: Alert Creation

  • Input: High-risk transactions (score > threshold)
  • Process: Generate fraud alerts with investigation packages
  • Dependency: Task 3.3 completion
  • Logic:
    • Critical alerts: Score > 250
    • High alerts: Score 150-250
    • Medium alerts: Score 100-149

Task 4.2: Alert Prioritization and Routing

  • Input: Generated alerts
  • Process: Assign priority levels and route to appropriate investigators
  • Dependency: Task 4.1 completion
  • Rules:
    • VIP customers → Senior analysts
    • High-value transactions → Experienced team
    • Standard alerts → General queue
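
A minimal sketch of the tiering and routing logic just described, using the example thresholds (250/150/100) and routing rules; the queue names and the high-value cut-off are illustrative assumptions:

def alert_tier(score):
    """Map a composite risk score to an alert tier (thresholds from the example)."""
    if score > 250:
        return "critical"
    if score >= 150:
        return "high"
    if score >= 100:
        return "medium"
    return None                                   # below threshold: no alert raised

def route_alert(is_vip, amount):
    """Route an alert to a queue using the example routing rules."""
    if is_vip:
        return "senior_analysts"                  # VIP customers -> senior analysts
    if amount >= 100_000:                         # illustrative high-value cut-off
        return "experienced_team"
    return "general_queue"

tier = alert_tier(310)
if tier:
    print(tier, route_alert(is_vip=False, amount=250_000))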

Phase 5: Reporting (20 minutes)

Task 5.1: Management Dashboard Update

  • Input: Processing results and alert statistics
  • Process: Update real-time management dashboards
  • Dependency: All previous phases completion
  • Metrics:
    • Transactions processed
    • Alerts generated by type
    • Processing performance

Task 5.2: Regulatory Report Generation

  • Input: Suspicious transactions and patterns
  • Process: Generate required regulatory reports
  • Dependency: Task 4.2 completion
  • Outputs:
    • Daily suspicious activity summary
    • Transaction monitoring statistics
    • Model performance metrics

Task 5.3: Performance Metrics Calculation

  • Input: Execution statistics from all tasks
  • Process: Calculate SLA compliance and performance metrics
  • Dependency: All tasks completion
  • Metrics:
    • Processing time by phase
    • Data quality statistics
    • System resource utilization

Execution Plan Configuration

1. Plan Definition in Actimize

Execution Plan Designer → New Plan → Daily Fraud Processing

Basic Configuration:

  • Plan Name: Daily_Fraud_Processing_Plan
  • Description: Daily batch processing for fraud detection
  • Owner: Fraud Operations Team
  • Priority: High
  • Maximum Duration: 4 hours

2. Task Configuration

Example Task Configuration:

Task Name: Extract_Transaction_Data
Task Type: Data Extraction
Executable: /opt/actimize/scripts/extract_transactions.sh
Parameters:
- START_DATE: ${BUSINESS_DATE}
- END_DATE: ${BUSINESS_DATE}
- SOURCE_DB: CORE_BANKING
Input Dependencies: None
Output: STAGING_TRANSACTIONS table
Success Criteria: Row count > 50000 AND Row count < 2000000
Timeout: 30 minutes
Retry Logic: 3 attempts with 5-minute intervals
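
The success criteria and timeout in this task configuration could be enforced by a wrapper like the following sketch. The script path comes from the example above; the row-count callback and exit behaviour are assumptions, not the actual AIS executor:

import subprocess

MIN_ROWS, MAX_ROWS = 50_000, 2_000_000       # success criteria from the example
TIMEOUT_SEC = 30 * 60                        # 30-minute task timeout

def run_extract(count_staging_rows):
    """Run the extract script, then verify the staging row count is in range."""
    try:
        subprocess.run(
            ["/opt/actimize/scripts/extract_transactions.sh"],
            check=True, timeout=TIMEOUT_SEC,
        )
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError) as exc:
        print(f"extract failed: {exc}")
        return False
    rows = count_staging_rows()              # e.g. SELECT COUNT(*) on STAGING_TRANSACTIONS
    return MIN_ROWS < rows < MAX_ROWS        # success only inside the expected range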

3. Dependency Management

Sequential Dependency:

Task A → Task B → Task C

Parallel Execution:

Task A ─┬→ Task B ─┐
        └→ Task C ─┴→ Task D

Conditional Branching:

Task A → Decision Point → If success: Task B
                        → If failure: Task C

4. Monitoring and Control

Real-Time Monitoring Dashboard:

  • Current executing task
  • Progress percentage
  • Estimated completion time
  • Resource utilization
  • Error/warning counts

Control Options:

  • Pause: Temporarily halt execution
  • Resume: Continue paused execution
  • Abort: Stop execution immediately
  • Skip Task: Skip current task and continue
  • Retry Task: Restart failed task

Advanced Features

1. Dynamic Parameters

${BUSINESS_DATE}: Automatically calculated business date
${PREV_BUSINESS_DATE}: Previous business day
${MONTH_END}: Last day of current month
${BATCH_ID}: Unique identifier for this execution
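
A sketch of how these dynamic parameters might be computed outside the platform. The business-day rule here is deliberately simplified (it only skips weekends; a real calendar would also handle holidays):

import calendar
import uuid
from datetime import date, timedelta

def business_date(today=None):
    """Most recent weekday on or before today (simplified: ignores holidays)."""
    d = today or date.today()
    while d.weekday() >= 5:                  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

def prev_business_date(d):
    """Business day before d, using the same simplified rule."""
    return business_date(d - timedelta(days=1))

def month_end(d):
    """Last calendar day of d's month."""
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

params = {
    "BUSINESS_DATE": business_date(),
    "PREV_BUSINESS_DATE": prev_business_date(business_date()),
    "MONTH_END": month_end(date.today()),
    "BATCH_ID": uuid.uuid4().hex,            # unique identifier for this execution
}
print(params)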

2. Conditional Logic

IF transaction_count > 1000000 THEN
    Execute high_volume_processing_path
ELSE
    Execute standard_processing_path
END IF

3. Load Balancing

IF server_cpu_usage > 80% THEN
    Route to backup_server
ELSE
    Continue on primary_server
END IF

4. Checkpointing

After each major phase completion:
Save execution state
Enable restart from last checkpoint if failure occurs
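
A minimal sketch of phase-level checkpointing with a state file; the file location, format, and phase names are assumptions for illustration:

import json
from pathlib import Path

STATE_FILE = Path("/tmp/daily_fraud_plan.checkpoint.json")   # hypothetical location
PHASES = ["collect", "process", "analyze", "alert", "report"]

def load_checkpoint():
    """Index of the first phase still to run (0 if no checkpoint exists)."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text()).get("next_phase", 0)
    return 0

def save_checkpoint(next_phase):
    STATE_FILE.write_text(json.dumps({"next_phase": next_phase}))

def run_phase(name):
    print(f"running phase: {name}")          # placeholder for the real work

for i in range(load_checkpoint(), len(PHASES)):
    run_phase(PHASES[i])
    save_checkpoint(i + 1)                    # a restart resumes after the last good phase
STATE_FILE.unlink(missing_ok=True)            # plan finished: clear the checkpoint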

Best Practices

1. Design Principles

  • Modularity: Break complex processes into smaller, manageable tasks
  • Parallelization: Run independent tasks simultaneously when possible
  • Fault Tolerance: Design for graceful failure handling
  • Monitoring: Include comprehensive logging and alerting

2. Performance Optimization

  • Resource Management: Allocate appropriate CPU, memory, and I/O resources
  • Data Partitioning: Process large datasets in manageable chunks
  • Caching: Cache frequently accessed reference data
  • Indexing: Ensure proper database indexing for performance
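
To illustrate the data-partitioning point, here is a sketch that processes a large dataset in fixed-size chunks rather than loading it all at once; the chunk size and the generated source data are placeholders:

from itertools import islice

CHUNK_SIZE = 100_000                          # tune to the memory and I/O budget

def chunks(rows, size=CHUNK_SIZE):
    """Yield successive lists of at most `size` rows from any iterable source."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

def process(batch):
    """Placeholder for per-chunk work (scoring, loading, etc.)."""
    return len(batch)

source = ({"amount": i} for i in range(250_000))   # stand-in for a large extract
total = sum(process(batch) for batch in chunks(source))
print(total)                                  # 250000 rows handled in three chunks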

3. Error Handling

  • Graceful Degradation: Continue processing when non-critical tasks fail
  • Detailed Logging: Capture sufficient detail for troubleshooting
  • Notification: Alert appropriate teams of failures
  • Recovery Procedures: Define clear steps for manual intervention

4. Testing and Validation

  • Unit Testing: Test individual tasks in isolation
  • Integration Testing: Test complete execution plans
  • Performance Testing: Validate under expected data volumes
  • Disaster Recovery: Test recovery procedures regularly

Common Execution Plan Patterns

1. ETL Pattern

Extract → Transform → Load → Validate → Report

2. Fan-Out/Fan-In Pattern

Single Input → Multiple Parallel Processes → Consolidation → Output
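
The fan-out/fan-in pattern can be sketched with the standard library's thread pool: one input is split, the partitions are processed in parallel, and the results are consolidated. The partitioning scheme and worker count are arbitrary choices for the example:

from concurrent.futures import ThreadPoolExecutor

def process_part(part):
    """Placeholder for one parallel branch (e.g. scoring one partition)."""
    return sum(part)

data = list(range(1_000))                      # single input
parts = [data[i::4] for i in range(4)]         # fan out into 4 partitions

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(process_part, parts))

print(sum(partial_results))                    # fan in: consolidate to one output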

3. Pipeline Pattern

Stage 1 → Stage 2 → Stage 3 → Stage 4 (overlapping execution)

4. Event-Driven Pattern

Event Trigger → Decision Logic → Action Execution → Result Processing

Troubleshooting Common Issues

1. Performance Problems

  • Symptoms: Execution taking longer than expected
  • Diagnosis: Check resource utilization, database locks, network latency
  • Solutions: Optimize queries, increase resources, implement parallelization

2. Data Quality Issues

  • Symptoms: High error rates, unexpected data volumes
  • Diagnosis: Examine data validation logs, compare with source systems
  • Solutions: Implement data quality checks, coordinate with source system owners

3. Dependency Failures

  • Symptoms: Tasks waiting indefinitely, orphaned processes
  • Diagnosis: Check task status, review dependency configuration
  • Solutions: Fix dependency logic, implement timeout mechanisms
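
One way to add the timeout mechanism mentioned above is to bound how long the plan waits on a dependency before treating it as failed; a small sketch (the polling interval and deadline are illustrative):

import time

def wait_for_dependency(is_done, timeout_sec=1800, poll_sec=30):
    """Poll a dependency check until it succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if is_done():
            return True
        time.sleep(poll_sec)
    return False                               # caller decides: retry, skip, or abort

# Example: give up quickly instead of waiting indefinitely on a stuck predecessor.
if not wait_for_dependency(lambda: False, timeout_sec=1, poll_sec=1):
    print("dependency timed out; escalating instead of waiting indefinitely")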

4. Resource Contention

  • Symptoms: Tasks failing due to insufficient resources
  • Diagnosis: Monitor CPU, memory, disk, and network usage
  • Solutions: Reschedule execution times, increase resources, optimize processes

Execution Plans are fundamental to NICE Actimize operations, providing the orchestration layer that ensures reliable, efficient, and scalable processing of financial crime detection workflows.


Excellent question! I need to be more precise about where execution plan configuration actually occurs in NICE Actimize. Let me clarify the correct components:

Where Execution Plans Are Actually Configured:

1. AIS (Actimize Intelligence Server) - Primary Location

AIS Admin Console → Job Management → Execution Plans

Path: AIS Administrative Interface → Scheduler → Job Definitions

What you configure here:

  • Job sequences and dependencies
  • Task execution order
  • Parameter passing between jobs
  • Error handling and retry logic
  • Resource allocation

2. Control-M Integration (Common in Enterprise Deployments)

Control-M Workbench → Job Definitions → Actimize Jobs

Why Control-M:

  • Enterprise job scheduling standard
  • Advanced dependency management
  • Cross-system workflow orchestration
  • Production-grade monitoring and alerting

3. ETL Tools Integration

SSIS (SQL Server Integration Services) → Package Designer
or
DataStage → Job Designer
or
Informatica → Workflow Manager

4. Platform-Specific Locations:

For IFM-X:

IFM-X Admin Console → System Administration → Batch Processing → Job Scheduler

For AML (WLF/SAM):

WLF Admin → System Configuration → Batch Jobs
SAM Admin → Job Management → Execution Plans

For ActOne (Investigation Platform):

ActOne Admin → Workflow Designer → Process Definitions

Corrected Real-World Example:

AIS Admin Console Configuration:

Step 1: Access Job Management

AIS Admin Console → Login → Job Management → Job Definitions

Step 2: Create New Job Plan

Job Definitions → New Job Plan
Name: Daily_Fraud_Processing_Plan
Description: Daily batch processing for fraud detection
Owner: fraud_ops_team
Priority: High

Step 3: Define Job Steps

Job Steps Tab:
Step 1: extract_transactions
- Executable: /opt/actimize/jobs/extract_data.sh
- Parameters: date=${BUSINESS_DATE}
- Dependencies: None

Step 2: transform_data
- Executable: /opt/actimize/jobs/transform_udm.sh
- Parameters: input_file=${Step1.output_file}
- Dependencies: Step 1 (Success)

Step 3: run_scenarios
- Executable: /opt/actimize/jobs/execute_scenarios.sh
- Parameters: data_date=${BUSINESS_DATE}
- Dependencies: Step 2 (Success)

Step 4: Configure Scheduling

Schedule Tab:
Trigger Type: Time-based
Schedule: Daily at 02:00 AM
Time Zone: EST
Calendar: Business Days Only

Step 5: Error Handling

Error Handling Tab:
On Failure: Send notification to fraud_ops@bank.com
Retry Logic: 3 attempts with 10-minute intervals
Continue on Warning: Yes
Abort on Critical Error: Yes

Alternative Configuration Methods:

1. Command Line Interface (CLI)

# AIS Command Line
ais_admin -create_job -name "Daily_Fraud_Processing" \
  -schedule "0 2 * * *" \
  -script "/opt/actimize/scripts/daily_processing.sh"

2. Configuration Files (XML/JSON)

<!-- Job definition file -->
<job-definition name="Daily_Fraud_Processing">
  <schedule>0 2 * * *</schedule>
  <steps>
    <step name="extract" script="/opt/actimize/extract.sh"/>
    <step name="transform" depends="extract" script="/opt/actimize/transform.sh"/>
    <step name="analyze" depends="transform" script="/opt/actimize/analyze.sh"/>
  </steps>
</job-definition>
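
As a sanity check of a job-definition file like the one above, a small sketch that parses it and resolves the step order from the depends attributes (standard library only; the file name is a placeholder):

import xml.etree.ElementTree as ET
from graphlib import TopologicalSorter

def step_order(path):
    """Return the job's step names in dependency order."""
    root = ET.parse(path).getroot()
    graph = {
        step.get("name"): {step.get("depends")} if step.get("depends") else set()
        for step in root.find("steps")
    }
    return list(TopologicalSorter(graph).static_order())

# For the file above this would print: ['extract', 'transform', 'analyze']
# print(step_order("daily_fraud_processing.xml"))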

3. Database Tables (Direct Configuration)

-- AIS job configuration tables
INSERT INTO AIS_JOB_DEFINITIONS (job_name, schedule_cron, owner)
VALUES ('Daily_Fraud_Processing', '0 2 * * *', 'fraud_ops');

INSERT INTO AIS_JOB_STEPS (job_id, step_order, step_name, executable)
VALUES (1, 1, 'extract_data', '/opt/actimize/extract.sh');

Component-Specific Interfaces:

IFM-X Specific:

IFM-X Management Console → Configuration → Processing → Batch Jobs

AML (Anti-Money Laundering) Specific:

AML Admin Console → System Setup → Job Scheduler → Execution Plans

Trade Surveillance Specific:

Trade Surveillance Admin → Configuration → Batch Processing

Modern Cloud-Based Implementations:

X-Sight Platform (Cloud-Native):

X-Sight Portal → Automation → Workflow Designer → Execution Plans

Kubernetes/Container Deployments:

Kubernetes Dashboard → Jobs/CronJobs → Actimize Processing Jobs

Important Note:

The specific interface depends on:

  • Actimize version (older vs. newer releases)
  • Deployment architecture (on-premise vs. cloud)
  • Integration approach (standalone vs. enterprise scheduler)
  • Platform components (IFM-X vs. AML vs. Trade Surveillance)

In my original response, I used "Execution Plan Designer" as a generic term, but the actual interface is typically the AIS Admin Console → Job Management section, or integrated enterprise schedulers like Control-M in production environments.