CSV Fault Tolerance

Real-world CSV files rarely arrive perfectly clean. A single bad row (a stray comma, a wrong data type, a trailing blank) would normally fail the whole file. With CSV fault tolerance enabled, the listener treats malformed rows as a per-row issue instead of a per-file one. It skips rows that don't fit your schema and hands the rest to the handler as if nothing happened.

What the handler sees

| Feature state | Bad row encountered | Handler receives | File outcome (default) |
| --- | --- | --- | --- |
| Off (default) | The first malformed row trips data binding | An error instead of content | Moves to the After Error destination |
| On | The row is dropped before the handler is called | Only valid rows, as usual | Moves to the After Success destination |
Fault tolerance needs a typed row schema

Fault tolerance only skips rows that fail typed binding. If your handler is generated with the default string[][] content type, every row is valid as a string array and nothing is ever dropped.

On the Add File Handler form, click + Define Row Schema and describe each column as a field on a Ballerina record. This flips the handler parameter to a typed array (for example Order[]), and rows that don't match trigger binding errors that fault tolerance can skip. See the row-schema step in Streaming large files for a walkthrough.
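As a sketch of what such a row schema looks like, here is a hypothetical `Order` record for a four-column CSV (the column names and types are assumptions; match them to your own file):

```ballerina
// Hypothetical row schema for a CSV with columns: id, item, qty, price.
// Closed record ({| ... |}) so extra columns also count as a binding failure.
type Order record {|
    int id;
    string item;
    int qty;
    decimal price;
|};
```

With this schema the handler parameter becomes `Order[]`, and a row such as `2,sprocket,five,4.75` fails binding because `five` cannot be cast to `int`, so fault tolerance drops it.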

Fault tolerance combines cleanly with:

  • Streaming — works with streamed CSV. A bad row no longer terminates the stream; processing continues through the rest of the file.
  • Move / Delete post-processing — because the handler completes successfully, the file follows the After Success action as it would for a clean run.

Configuration

Fault tolerance is a listener-level setting. Turn it on once per listener and every CSV handler attached to that service inherits the behaviour. It is part of the listener's regular configuration, not tucked under Advanced.

  1. Open the listener by clicking its name under Listeners in the sidebar, or under Attached Listeners in the FTP Integration Configuration panel.

  2. Scroll to the Csv Fail Safe field and click Record to open the builder.

  3. In the Record Configuration panel, tick the top-level FailSafeOptions checkbox to include the record, tick contentType, and pick an enum value:

    | contentType value | What gets logged for a dropped row |
    | --- | --- |
    | METADATA (default) | Row number, column, and the binding error. |
    | RAW | The raw row text as it appeared in the source file. |
    | RAW_AND_METADATA | Both. |

    (Screenshot: Record Configuration panel with FailSafeOptions and contentType selected)

  4. Close the panel and click Save. Every CSV handler on every service attached to this listener now skips malformed rows.

Dropped-row log

When fault tolerance is on, the listener writes each dropped row to a side log file. The filename is the source CSV's basename with its extension replaced by _error.log, and the file is created in the integration's working directory:

incoming/orders-2026.csv   →   <working-dir>/orders-2026_error.log

Each dropped row becomes one JSON line. The contentType setting picked above controls which fields appear:

| contentType value | time | location.{row,column} | offendingRow | message |
| --- | --- | --- | --- | --- |
| METADATA (default) | ✓ | ✓ | | ✓ |
| RAW | ✓ | | ✓ | |
| RAW_AND_METADATA | ✓ | ✓ | ✓ | ✓ |

Example entry with RAW_AND_METADATA:

{"time":"2026-04-19T05:17:27.239Z","location":{"row":3,"column":3},"offendingRow":"2,sprocket,five,4.75","message":"value 'five' cannot be cast into 'int'"}

The file is opened in append mode, so repeated drops from files that resolve to the same log name (for example, a fresh orders-2026.csv delivered every night) accumulate in the same log.
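Because each dropped row is one JSON object per line, the log is easy to post-process. The sketch below (the path and the presence of a `message` field are assumptions based on the example entry above) reads the log back and prints each binding error:

```ballerina
import ballerina/io;

// Sketch: read a dropped-row log line by line and print each binding error.
// Assumes METADATA or RAW_AND_METADATA entries, which carry a "message" field.
public function main() returns error? {
    string[] lines = check io:fileReadLines("orders-2026_error.log");
    foreach string line in lines {
        json entry = check line.fromJsonString();
        io:println(check entry.message);
    }
}
```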

The _error.log filename, location, and JSON layout are the built-in defaults for the onFileCsv handler. If you need a different filename, directory, or log format, switch to an onFileText handler and parse the CSV yourself with csv:parseString; you then control every aspect of error handling. See CSV & Flat File Processing for the parser reference and the handler pattern.
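As a rough sketch of that custom route (the handler wiring and the `Order` field layout are assumptions, and the naive comma split here ignores quoted fields — csv:parseString handles whole documents with proper quoting when you want typed binding in one call), per-row parsing with your own logging could look like:

```ballerina
import ballerina/log;

type Order record {|
    int id;
    string item;
    int qty;
    decimal price;
|};

// Sketch: called from an onFileText-style handler with the file body as a string.
// Each line is bound to Order individually, so one bad line never fails the rest.
function handleCsvText(string content) {
    foreach string line in re `\r?\n`.split(content) {
        if line.trim().length() == 0 {
            continue; // skip trailing blank lines
        }
        Order|error row = parseRow(re `,`.split(line));
        if row is error {
            // Your own log sink, format, and destination go here.
            log:printError("dropped row", reason = row.message(), raw = line);
        } else {
            // process the valid row
        }
    }
}

function parseRow(string[] cols) returns Order|error {
    if cols.length() != 4 {
        return error(string `expected 4 columns, got ${cols.length()}`);
    }
    return {
        id: check int:fromString(cols[0].trim()),
        item: cols[1].trim(),
        qty: check int:fromString(cols[2].trim()),
        price: check decimal:fromString(cols[3].trim())
    };
}
```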

Dropped rows don't flip the file to After Error

The listener's After Success and After Error branches are picked based on whether the handler returned an error. A dropped row is not itself an error. Even if every row in the file gets dropped, the handler still receives an empty typed array and the file takes the After Success path by default.
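If an all-rows-dropped file should not take the After Success path, one workaround (a sketch — the handler name and `Order[]` parameter are assumptions from the schema example earlier) is to return an error yourself when the typed array arrives empty:

```ballerina
// Sketch: returning an error from the handler routes the file to After Error.
function onOrders(Order[] rows) returns error? {
    if rows.length() == 0 {
        // Either every row was dropped or the file was genuinely empty --
        // this check cannot tell the two apart.
        return error("no usable rows; routing file to After Error");
    }
    // process rows...
}
```

Note the caveat in the comment: a legitimately empty file triggers the same branch, so only use this if empty input is also an error in your flow.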

When to leave it off

| Scenario | Recommendation |
| --- | --- |
| Partner agreement says every row must be accounted for | Leave fault tolerance off. A single bad row should fail the file and route it to the error directory for replay. |
| The feed is known-dirty and most rows are usable | Turn it on. Clean rows proceed and dropped rows are logged. |
| You need files with any dropped rows to go to After Error | Leave fault tolerance off, but note this will fail the whole file at the first bad row. |
| You need a different log format, filename, or location | Switch to onFileText and parse with csv:parseString. |

What's next