ESPROFILER Handbook
Data Ops

Adding Vendors

1. Manual Vendor Addition

To add a vendor manually to the system, follow these steps:

  1. Open the Workflow: Navigate to the n8n Manual Vendor Workflow.
  2. Configure: Edit the Input fields with the necessary vendor details.
  3. Execute: Click Save, then click Execute Workflow.

Known Issue: Manual Logo & Banner Upload Required. Due to recent system changes, vendor logos and banners are not automatically populated during the manual creation process. Action Required:

  1. Approve the vendor from the Draft Vendor Screen.
  2. Navigate to the Vendor Profile within the Vendor Portal.
  3. Manually upload the LinkedIn logo and the company banner.

2. Bright Data N8n Pipeline

The Bright Data pipeline is our primary method for large-scale vendor discovery. We receive monthly exports of LinkedIn company data, which include basic vendor information and enriched professional data. This pipeline identifies relevant cybersecurity vendors and automatically adds them to the platform.

Phase A: Data Preparation

Before the data can be processed by n8n, it must be partitioned into smaller chunks for stability.

  1. Download: Get the latest Bright Data export from the S3 Feeds Bucket (Production Account).
  2. Upload: Move the downloaded file to the ESPROFILER Development AWS S3 account at s3://esp-dev-bright-data/uploads/.
  3. Partition: Run the dedicated Lambda function to split the file into chunks (100 records per chunk).
    • Lambda ARN: arn:aws:lambda:eu-west-2:339713162929:function:test-bright-data-partition
    • Execution:
      • Navigate to the Lambda function in the AWS Dev Console.
      • Go to the Test section and select the bright_data_chunking test event.
      • Edit the JSON payload with the correct sourceKey (the path to your uploaded file).
      • Save and run the test.
    • Critical Step: Record the total number of chunks generated. You will need this count to configure the n8n workflow in Phase B.
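The partition step above can be sketched locally as follows. This is an illustrative approximation of the chunking logic only; the function name `partition_records` and the record format are assumptions, not the Lambda's actual code.

```python
CHUNK_SIZE = 100  # records per chunk, matching the partition Lambda's setting

def partition_records(records, chunk_size=CHUNK_SIZE):
    """Split a list of records into fixed-size chunks."""
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

# Example: 250 records produce 3 chunks (100, 100, 50).
# Record the chunk count: it configures the n8n workflow in Phase B.
records = [{"id": n} for n in range(250)]
chunks = partition_records(records)
print(len(chunks))  # → 3
```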

Video Guide: Chunking Bright Data

Access & Permissions

  • Production S3: Requires IAM permissions for the feeds S3 bucket. Contact Louis to be added to the appropriate policy group.
  • Dev S3 & Lambda: Contact Mohammad Umair (Account ID: 3397-1316-2929) for admin access to the Development account.

Phase B: Automated ETL Sequence

Once the files are chunked, the processing is initiated within n8n.

  1. Trigger Workflow: Open and run the brightdataTriggerEtlV3.0 workflow.
  2. Manual Config: Manually update the File Name and the Range of Files within the workflow based on the chunk count recorded in Phase A.
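The File Name and Range of Files settings can be derived from the chunk count recorded in Phase A. The sketch below assumes a hypothetical `{base}_chunk_{n}.json` naming pattern purely for illustration; check the actual chunk keys in S3 before configuring the workflow.

```python
def chunk_keys(base_name, total_chunks):
    """List the chunk file keys for a given chunk count.
    The naming pattern here is an assumption, not the pipeline's real convention."""
    return [f"{base_name}_chunk_{n}.json" for n in range(1, total_chunks + 1)]

keys = chunk_keys("bright_data_export", 3)
print(keys)
# → ['bright_data_export_chunk_1.json', 'bright_data_export_chunk_2.json', 'bright_data_export_chunk_3.json']
```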

Video Guide: Running the ETL Workflow

The system then triggers the following sequence via RabbitMQ queues:

Order | Workflow Name                          | Responsibility                                       | n8n Link
1     | triggerBrightDataS3ExportV3.0          | Reads chunks and performs keyword matching.          | View Workflow
2     | triggerBrightDataVendorRelevancyV2.0   | Determines if the vendor is cybersecurity-relevant.  | View Workflow
3     | triggerBrightDataVendorEnrichmentV2.0  | Enriches data using LinkedIn records.                | View Workflow
4     | triggerBrightDataVendorCreateV3.0      | Adds validated vendors to the Draft Vendor table.    | View Workflow
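Conceptually, each workflow in the sequence consumes the previous stage's output, so the order is strict. The model below illustrates that ordering only; the stage functions are placeholders, not the actual n8n workflow logic.

```python
# Placeholder stages mirroring the four-step sequence above.
def s3_export(record):        # keyword matching against chunk contents
    record["keywords_matched"] = True
    return record

def relevancy(record):        # cybersecurity relevance decision
    record["relevant"] = True
    return record

def enrichment(record):       # enrichment from LinkedIn records
    record["enriched"] = True
    return record

def vendor_create(record):    # write to the Draft Vendor table
    record["draft_created"] = True
    return record

PIPELINE = [s3_export, relevancy, enrichment, vendor_create]  # order is strict

def run_pipeline(record):
    for stage in PIPELINE:
        record = stage(record)
    return record

print(run_pipeline({"vendor": "ExampleCo"}))
```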

Phase C: Final Approval

Vendors identified by the pipeline must be manually reviewed before they are fully integrated.

  1. Review: Log into the CPS Manage Space.
  2. Navigate: Go to the Draft Vendor space to see pending requests.
  3. Action: Review each record and either Approve or Reject.
  4. Automatic Trigger: Once approved, the system sends a message to the draft.product.create queue to begin product identification.
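A hypothetical shape for the message emitted on approval is sketched below. The field names are illustrative assumptions, not the platform's actual schema; the queue name comes from the step above.

```python
import json

def approval_message(vendor_id, approved_by):
    """Build the (hypothetical) payload published when a draft vendor is approved."""
    return json.dumps({
        "event": "vendor.approved",          # assumed event name
        "vendorId": vendor_id,
        "approvedBy": approved_by,
        "target": "draft.product.create",    # queue named in this handbook
    })

print(approval_message("v-123", "ops@example.com"))
```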

Video Guide: Reviewing Draft Vendors


3. Product Creation

The product creation process is event-driven and begins automatically after a vendor is approved.

  1. Trigger: A message is dropped into the dep.draft.product.create queue.
  2. Processing: An n8n workflow identifies cybersecurity products, features, and use cases.
  3. Order of Execution: Entities are created in a strict hierarchy: Products → Features → Use Cases.
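The strict hierarchy can be sketched as a level-by-level creation loop, so parents always exist before their children. The entity level names come from the handbook; the data structures and function name are illustrative.

```python
# Creation must proceed level by level: Products, then Features, then Use Cases.
CREATION_ORDER = ["Products", "Features", "Use Cases"]

def create_entities(payload):
    """Create entities in hierarchy order, regardless of input order.
    The tuple append is a stand-in for the real create call."""
    created = []
    for level in CREATION_ORDER:
        for entity in payload.get(level, []):
            created.append((level, entity))
    return created

result = create_entities({
    "Use Cases": ["Threat hunting"],
    "Features": ["SIEM integration"],
    "Products": ["ExampleSIEM"],
})
print(result)
# Products come first even though they were listed last in the input.
```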
Copyright © 2026