
FTP / SFTP

FTP, SFTP, and FTPS file integrations poll remote file servers for new files and process them as they arrive. Use them for ETL pipelines, batch processing, and B2B integrations where partners exchange data as CSV, XML, JSON, or binary files.

Protocol | Description | Transport security | Authentication
FTP | Standard file transfer protocol with no encryption. Suitable for internal networks or non-sensitive data. | None | Anonymous or username/password
SFTP | File transfer over an SSH tunnel. Use this for secure transfers when the remote server supports SSH. | SSH | Username/password or private key
FTPS | FTP extended with SSL/TLS encryption. Use this when the remote server requires FTP with certificate-based security. | SSL/TLS | Username/password with certificate verification

Creating an FTP service

Use this flow for plain (unencrypted) FTP. Default port: 21. Supports anonymous connections and username/password authentication.

  1. Click + Add Artifact in the canvas or click + next to Entry Points in the sidebar.

  2. In the Artifacts panel, select FTP / SFTP under File Integration.

    Artifacts panel showing FTP / SFTP under File Integration

  3. In the Create FTP Integration form, keep Protocol set to ftp.

    Create FTP Integration form, FTP protocol

  4. Fill in the Listener Configuration:

    Field | Description
    Listener Name | Identifier for this listener (e.g., ftpListener).
    Host | Hostname or IP address of the remote server (e.g., ftp.example.com).
    Port Number | Port to connect on. Set to 21 for standard FTP.
  5. Choose an authentication method:

    Option | Fields revealed | Use when
    No Authentication | | The server accepts anonymous logins or a username alone without a password.
    Basic Authentication | Username, Password | The server requires a username and password.
  6. Enter the Monitoring Path — the directory on the remote server to poll for new files (e.g., /uploads). Defaults to /.

  7. Click Create. WSO2 Integrator opens the service in the Service Designer with the listener pill attached.

    Service Designer showing the FTP service canvas

  8. Click + Add File Handler to define how incoming files are processed.
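
The flow above produces a listener and a service in the project. As a rough sketch (assuming Basic Authentication; the host, credentials, and handler shown are placeholders), the generated Ballerina code takes this shape. Check the Ballerina Code tab for the exact output:

    import ballerina/ftp;

    // Listener: the connection to the remote server (step 4).
    listener ftp:Listener ftpListener = new ({
        protocol: ftp:FTP,
        host: "ftp.example.com",
        port: 21,
        auth: {
            credentials: {username: "bob", password: "secret"}
        }
    });

    // Service: monitors one directory on that server (step 6); the handler comes from step 8.
    @ftp:ServiceConfig {
        path: "/uploads"
    }
    service on ftpListener {
        remote function onFileText(string content) returns error? {
            // process each new text file found under /uploads
        }
    }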

Creating an FTPS service

Use this flow for FTP over SSL/TLS. Default port: 21 (explicit FTPS) or 990 (implicit FTPS). The listener performs a TLS handshake before any credentials are exchanged, so certificate material is always part of the configuration.

  1. Click + Add Artifact and select FTP / SFTP under File Integration.

  2. In the Create FTP Integration form, set Protocol to ftps.

    Create FTP Integration form, FTPS protocol

  3. Fill in the Listener Configuration:

    Field | Description
    Listener Name | Identifier for this listener (e.g., ftpsListener).
    Host | Hostname or IP address of the remote server (e.g., ftps.example.com).
    Port Number | Port to connect on. Set to 21 for explicit FTPS or 990 for implicit FTPS.
  4. Choose an authentication method:

    Option | Fields revealed | Use when
    No Authentication | | The server accepts anonymous TLS connections.
    Basic Authentication | Username, Password | The server requires credentials over the encrypted channel. Typical for FTPS.
  5. Expand Advanced Configurations. The Secure Socket field is required for FTPS. Click Record and supply the SSL/TLS configuration — at minimum a truststore or certificate path so the client can verify the server:

    {
        cert: {path: "/path/to/truststore.jks", password: "changeit"}
    }

    See SSL/TLS configuration for the full field set, including mutual TLS and protocol-version pinning.

  6. Enter the Monitoring Path and click Create.

  7. Click + Add File Handler in the Service Designer to define how incoming files are processed.
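
As a sketch, an FTPS listener combining Basic Authentication with the secure-socket record from step 5 looks roughly like this. The ftp:FTPS constant and the placement of secureSocket inside the auth record are assumptions based on the listener reference later on this page; verify them against the generated code:

    import ballerina/ftp;

    listener ftp:Listener ftpsListener = new ({
        protocol: ftp:FTPS,
        host: "ftps.example.com",
        port: 21,
        auth: {
            credentials: {username: "bob", password: "secret"},
            // Truststore so the client can verify the server certificate (step 5).
            secureSocket: {
                cert: {path: "/path/to/truststore.jks", password: "changeit"}
            }
        }
    });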

Creating an SFTP service

Use this flow for SFTP (FTP over SSH). Default port: 22. The form collects the SSH private key material and matching username. For username/password SFTP authentication, use the Ballerina Code tab below.

  1. Click + Add Artifact and select FTP / SFTP under File Integration.

  2. In the Create FTP Integration form, set Protocol to sftp.

    Create FTP Integration form, SFTP protocol

  3. Fill in the Listener Configuration:

    Field | Description
    Listener Name | Identifier for this listener (e.g., sftpListener).
    Host | Hostname or IP address of the remote server (e.g., sftp.example.com).
    Port Number | Port to connect on. Set to 22 for SFTP.
  4. Choose Certificate-Based Authentication as the authentication method. This reveals the Private Key and Username fields.

  5. Enter the Private Key record. Click Record on the field and supply:

    {path: "/path/to/private_key"}

    If the private key is passphrase-protected, include the passphrase in the record:

    {path: "/path/to/private_key", password: "my-passphrase"}
  6. Enter the Username that matches the configured private key and the Monitoring Path.

  7. Click Create. Then click + Add File Handler in the Service Designer.
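
As a sketch (host, username, and key path are placeholders), the resulting SFTP listener carries the private key and username inside its auth record:

    import ballerina/ftp;

    listener ftp:Listener sftpListener = new ({
        protocol: ftp:SFTP,
        host: "sftp.example.com",
        port: 22,
        auth: {
            // Username that matches the configured private key (step 6).
            credentials: {username: "bob"},
            // Add password: "my-passphrase" if the key is passphrase-protected.
            privateKey: {path: "/path/to/private_key"}
        }
    });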

File handlers

A file handler is a remote function that WSO2 Integrator calls each time the listener's polling cycle detects a file event in the monitored directory. A service can declare any combination of the three handler types:

Handler | Trigger
onCreate (onFileText / onFileJson / onFileXml / onFileCsv / onFile) | A new file matching the service's fileNamePattern appears on the remote server. The function name depends on the content type — one variant per file format.
onFileDelete | A previously seen file is no longer present on the remote server.
onError | The runtime could not map incoming content to a typed onCreate handler — for example, a JSON handler received malformed JSON.

At least one onCreate or onFileDelete handler is required — a service with only an onError handler is not valid.

onFileDeleted is also supported as a legacy/deprecated delete callback. Prefer onFileDelete for new services.
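
For orientation, here is a sketch of a service declaring all three handler types. The function names come from the table above; the parameter and return shapes are illustrative, since the Service Designer generates the exact signatures (the listener name and monitored path are placeholders):

    import ballerina/ftp;

    @ftp:ServiceConfig {
        path: "/uploads"
    }
    service on ftpListener {
        // onCreate variant for plain-text files: called with each new file's content.
        remote function onFileText(string content) returns error? {
        }

        // Called when a previously seen file is no longer on the server
        // (the path parameter shown here is illustrative).
        remote function onFileDelete(string path) returns error? {
        }

        // Called when incoming content cannot be bound to the typed onCreate handler.
        remote function onError(error err) returns error? {
        }
    }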

Adding a file handler

In the Service Designer, click + Add File Handler and pick onCreate, onFileDelete, or onError. The handler configuration panel opens on the right.

File handler configuration panel

Field | Description
File Format | (onCreate only) The format of incoming files. Determines the handler function name and the type of the content parameter. Options: TEXT, JSON, XML, CSV, RAW. See Content types.
Rows | (CSV only) The content schema is defined per row — each CSV row maps to a record type (Row schema).
Stream (Large Files) | (CSV and RAW) Process the file content in chunks instead of loading it all into memory. See Typed content and streaming.
+ Define Row Schema | (CSV only) Map CSV rows to a typed record. See Typed content and streaming.
+ Define Content Schema | (JSON, XML only) Map the document to a typed record. See Typed content and streaming.
After File Processing — Success | Action to take when the handler completes without error: Move to a destination path or Delete the file. See Post-processing.
After File Processing — Error | Action to take when the handler returns an error: Move to an error directory or Delete the file.

Expand Advanced Parameters for optional handler parameters:

Field | Description
File Metadata (fileInfo) | Include ftp:FileInfo as a parameter in the handler function, giving access to file name, path, size, and other metadata. See FileInfo.
FTP Connection (caller) | Include ftp:Caller as a parameter, giving access to read/write operations on the same server. See Caller operations.

Click Save to add the handler.

Content types

The File Format chosen on an onCreate handler determines the function name and the type of the content parameter.

File Format | Handler function | Content type | Use when
TEXT | onFileText | string | Files are plain text (logs, EDI, custom formats).
JSON | onFileJson | json or a typed record | Files are JSON documents.
XML | onFileXml | xml or a typed record | Files are XML documents.
CSV | onFileCsv | string[][], record[], or a stream variant | Files are comma-separated values. Use a typed record to map rows automatically. Enable Stream for large files.
RAW | onFile | byte[] or stream<byte[], error?> | Binary files or when you need raw byte access. Enable Stream for large files.

Post-processing: moving or deleting files

Once a handler finishes — whether successfully or with an error — the runtime can move the file to another directory on the same server or delete it outright. You configure this directly on the Add File Handler form; switch to the Ballerina Code tab only when you need to review or adjust the generated annotation.

The Add File Handler form's After File Processing section has two independent toggles:

Event | Ticked by default? | Action picker | Extra input
Success | Yes | Move or Delete | Move To destination path (required when Move is chosen)
Error | Yes | Move or Delete | Move To destination path (required when Move is chosen)

Common combinations:

  • Move on success, move on error — archive processed files and quarantine failures. Set separate destinations like /processed and /errors.
  • Delete on success, move on error — discard successfully processed files, keep failures for review.
  • Leave the file alone for this outcome — untick Success or Error to skip the action for that side.

The choices update the handler's @ftp:FunctionConfig annotation as you toggle; switch to the Ballerina Code tab to review the generated annotation.

Typed content and streaming

CSV, JSON, and XML handlers can receive their payload as a free-form type (string[][], json, xml) or as a typed record you define. CSV and RAW can additionally deliver the content as a stream<T, error?> so the handler never holds the whole file in memory. Both options are set on the Add File Handler form.

Map the payload to a typed record. Click the button exposed by the selected File Format:

File Format | Button | What it does
CSV | + Define Row Schema | Opens a record builder for a single CSV row; the handler's content parameter becomes YourRow[].
JSON | + Define Content Schema | Opens a record builder for the whole JSON document; the handler's content parameter becomes YourRecord.
XML | + Define Content Schema | Same as JSON — the handler receives a typed record mapped from the XML document.
TEXT, RAW | | Content is always a string or byte[]; no schema to define.

The record builder lets you add fields one at a time. Each field gets a name and a Ballerina type (string, int, decimal, boolean, or another record). Saving the schema updates the handler signature so you get type-checked field access inside the body.

Stream the content for large files. On CSV and RAW handlers, tick Stream (Large Files). The content parameter becomes a stream<T, error?> and the runtime pipes the file to the handler incrementally — iterate with content.forEach(...) instead of holding the whole payload.

Stream combines with Define Row Schema (CSV) — ticking both produces stream<YourRow, error?>. JSON and XML always parse the entire document at once and do not offer streaming.
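
As a sketch, assuming a hypothetical OrderRow record created with + Define Row Schema and a listener named ftpListener, ticking Stream (Large Files) on a CSV handler yields a signature like the following:

    import ballerina/ftp;

    // Hypothetical row schema: one record per CSV row.
    type OrderRow record {
        string orderId;
        string item;
        decimal amount;
    };

    @ftp:ServiceConfig {
        path: "/orders",
        fileNamePattern: ".*\\.csv"
    }
    service on ftpListener {
        // Stream + row schema: rows arrive incrementally as stream<OrderRow, error?>.
        remote function onFileCsv(stream<OrderRow, error?> content) returns error? {
            check content.forEach(function(OrderRow row) {
                // handle one row at a time; the full file is never held in memory
            });
        }
    }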

FileInfo

A handler that includes the fileInfo parameter receives an ftp:FileInfo record with metadata about the incoming file.

Field | Type | Description
name | string | File name without path
path | string | Relative path on the remote server
pathDecoded | string | Normalized absolute path — use this for all caller-> operations
size | int | File size in bytes
lastModifiedTimestamp | int | Last-modified time as UNIX epoch milliseconds
extension | string | File extension
isFile | boolean | true if the entry is a file
isFolder | boolean | true if the entry is a directory
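
For example, a handler with File Metadata enabled can log the incoming file's details and keep pathDecoded at hand for later caller operations. This is a sketch: the listener name, path, and parameter order are illustrative.

    import ballerina/ftp;
    import ballerina/log;

    @ftp:ServiceConfig {path: "/uploads"}
    service on ftpListener {
        remote function onFileText(string content, ftp:FileInfo fileInfo) returns error? {
            // Log basic metadata about the file that triggered this handler.
            log:printInfo(string `received ${fileInfo.name} (${fileInfo.size} bytes) at ${fileInfo.pathDecoded}`);
        }
    }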

Caller operations

For most use cases, the typed handler parameters (string, json, xml, records, streams) and @ftp:FunctionConfig post-processing actions are sufficient. When you need additional control — such as reading a related file, writing output to a different path, or managing files manually — add the ftp:Caller parameter to your handler. It provides typed read and write operations on the connected server using the same session.

Reading files:

Operation | Return type | Description
caller->getText(path) | string|Error | Read file as plain text
caller->getBytes(path) | byte[]|Error | Read file as a byte array
caller->getJson(path, typedesc) | T|Error | Read and deserialize JSON; pass a typed record typedesc to get a typed result
caller->getXml(path, typedesc) | T|Error | Read and deserialize XML; pass a typed record typedesc to get a typed result
caller->getCsv(path, typedesc) | T|Error | Read and parse CSV; pass string[][] or a record array typedesc
caller->getBytesAsStream(path) | stream<byte[], error?>|Error | Read as a byte stream for large files
caller->getCsvAsStream(path, typedesc) | stream<T, error?>|Error | Read CSV as a stream for large files

Writing files:

All write operations accept an ftp:FileWriteOption third argument to control overwrite behaviour (OVERWRITE or APPEND).

Operation | Description
caller->putText(path, content, option) | Write a string to a file
caller->putBytes(path, content, option) | Write a byte array to a file
caller->putJson(path, content, option) | Serialize and write JSON
caller->putXml(path, content, option) | Serialize and write XML
caller->putCsv(path, content, option) | Serialize and write CSV (string[][] or record[])
caller->putBytesAsStream(path, stream, option) | Write a byte stream to a file

File management:

Operation | Return type | Description
caller->delete(path) | Error? | Delete a file
caller->mkdir(path) | Error? | Create a directory
caller->exists(path) | boolean|Error | Check whether a path exists on the server
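
Putting a few of these together, a sketch of a handler with both File Metadata and FTP Connection enabled could read a related lookup file, write a processed copy, and remove the original. The listener name, paths, and parameter order are illustrative:

    import ballerina/ftp;
    import ballerina/log;

    @ftp:ServiceConfig {path: "/incoming"}
    service on ftpListener {
        remote function onFileJson(json content, ftp:FileInfo fileInfo, ftp:Caller caller) returns error? {
            // Read a related lookup file from the same server over the same session.
            string mapping = check caller->getText("/config/mapping.txt");
            log:printInfo("using mapping: " + mapping);

            // Write a processed copy elsewhere, overwriting any existing file.
            check caller->putJson("/processed/" + fileInfo.name, content, ftp:OVERWRITE);

            // Remove the original using its normalized absolute path.
            check caller->delete(fileInfo.pathDecoded);
        }
    }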

Service and listener

Every FTP/SFTP integration you see in the project tree is built from two pieces:

Construct | Role
Listener | The connection to the remote server. Holds the protocol, host, port, credentials, and how often to poll. Each listener represents one server.
Service | The processing logic for a single directory on that server. Holds the monitoring path, file filters, and the file handlers that run when a file arrives.

You can reuse either side of the pair. The same listener can feed several services (for example, different directories on one server that need different handling), and a single service can draw from several listeners (for example, a primary and a backup server feeding the same integration). Three topologies cover the common cases:

Topology | When to use
One listener ↔ one service | The default. One remote server, one integration handling one directory.
One listener ↔ many services | One remote server with multiple directories that need different handlers (for example, /orders and /invoices on the same FTP server). Create one listener and attach several services to it.
One service ↔ many listeners | One integration that drains two (or more) remote servers — typical for HA/failover setups or for consolidating identical file feeds from multiple partners.

One listener ↔ many services

Under Entry Points, each service appears as its own FTP Integration - <path> item. Under Listeners, you'll see a single listener shared by all of them:

Project tree showing two services under Entry Points sharing one listener

This is the default path: every FTP service you add after the first one starts on the Use existing option, so new services reuse the listener unless you opt out. See Reusing an existing listener when creating a service.
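
In code, this topology is simply two services attached to the same listener name. A sketch, assuming a listener called sharedFtpListener and the /orders and /invoices directories from the table above:

    import ballerina/ftp;

    listener ftp:Listener sharedFtpListener = new ({
        host: "ftp.example.com",
        auth: {credentials: {username: "bob", password: "secret"}}
    });

    @ftp:ServiceConfig {path: "/orders"}
    service on sharedFtpListener {
        remote function onFileCsv(string[][] content) returns error? {
            // order-specific handling
        }
    }

    @ftp:ServiceConfig {path: "/invoices"}
    service on sharedFtpListener {
        remote function onFileXml(xml content) returns error? {
            // invoice-specific handling
        }
    }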

One service ↔ many listeners

A single FTP Integration - <path> entry lists both (or all) of its listeners under Attached Listeners in the FTP Integration Configuration panel:

Service Configuration panel showing two listeners attached to a single service

Build this topology by opening the service's Configure panel and clicking + Attach Listener — see Attaching an additional listener to an existing service.
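
In code, this topology is a single service declaration attached to more than one listener. A sketch, assuming two listeners named ftpShared and ftpBackup (connection details are placeholders):

    import ballerina/ftp;

    listener ftp:Listener ftpShared = new ({
        host: "primary.example.com",
        auth: {credentials: {username: "bob", password: "secret"}}
    });

    listener ftp:Listener ftpBackup = new ({
        host: "backup.example.com",
        auth: {credentials: {username: "bob", password: "secret"}}
    });

    // One handler implementation processes files from both servers.
    @ftp:ServiceConfig {path: "/feed"}
    service on ftpShared, ftpBackup {
        remote function onFileText(string content) returns error? {
            // identical processing regardless of which server delivered the file
        }
    }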

For the general concept, see Services and listeners.

Attaching listeners to services

Once the integration has at least one listener, two flows wire up the topologies described under Service and listener. Sharing a listener is the default when you create a second FTP service from the Create FTP Integration form; attaching an additional listener to an existing service is an explicit action on the Service Configuration panel.

Reusing an existing listener when creating a service

Use this flow to build the one listener ↔ many services topology. After the first FTP service is saved, the Create FTP Integration form defaults to Use existing when you open it again — every subsequent service attaches to an existing listener unless you explicitly switch to Create new.

  1. Click + Add Artifact and select FTP / SFTP under File Integration to open the Create FTP Integration form.

  2. When the integration already has at least one FTP listener, the Select an existing FTP listener or create a new one picker at the top of the form defaults to Use existing. Sharing a listener is the encouraged flow for the second and subsequent services. (Create new should only be used when you want a dedicated listener for this service.)

    Create new vs Use existing radio selector, with Use existing selected by default

  3. The Listener Name field is a dropdown prefilled with the first available listener. Pick a different one if needed. Protocol, Host, Port Number, and the authentication method are locked (they belong to the listener, not to this service) so you can see the settings the new service will inherit but cannot change them here.

    Use existing listener — listener fields locked, Monitoring Path editable

  4. Enter the Monitoring Path for the new service (this is service-level, so it stays editable even under Use existing) and click Create.

  5. The project tree now shows both services under Entry Points and the single shared listener under Listeners:

    Project tree showing two services sharing one listener

Attaching an additional listener to an existing service

Use this flow to build the one service ↔ many listeners topology. It attaches a second (or third) listener to a service that is already saved, so a single handler implementation processes files from every attached listener.

  1. Open the service you want to extend — click the FTP Integration - <path> entry in the project tree.

  2. In the Service Designer, click Configure to open the FTP Integration Configuration panel. The left-hand nav pane lists the service's current listeners under Attached Listeners.

  3. Scroll to the bottom of the panel and click + Attach Listener. The Attach Listener side panel opens with two tabs:

    Attach Listener side panel with the Existing Listeners tab

    Tab | Use when
    Existing Listeners | You already have another listener defined in the integration. Click its name to attach it. Listeners already bound to this service are omitted from the list.
    Create New Listener | You want to spawn a new listener inline — same field set as the Create FTP Integration form's listener section.
  4. Pick the listener to attach. The panel closes and the Attached Listeners list in the left nav now includes both listeners:

    Attached Listeners list showing ftpShared and ftpBackup

    Click any listener name in this list to edit its configuration inline on the right.

Service configuration

The @ftp:ServiceConfig annotation controls what the service monitors — the directory path, file filters, age constraints, and dependency conditions.

In the Service Designer, click Configure to open the FTP Integration Configuration panel. The panel has a left-hand navigation that lists the service (FTP Integration) and every listener under Attached Listeners. Clicking a name pivots the right pane between the service's own configuration fields and the inline configuration for the selected listener — headed Configuration for <listenerName>.

Service Configuration panel with Attached Listeners left navigation

Field | Description
Service Configuration | Service-level settings. Accepts an @ftp:ServiceConfig record literal such as { path: "/incoming", fileNamePattern: ".*\\.csv" }. Click the full-screen icon on the field to open the guided Record Configuration builder with checkboxes for optional fields. path is required; the other fields are optional.

@ftp:ServiceConfig fields:

Field | Type | Default | Description
path | string | "/" | Directory on the remote server to monitor for new files.
fileNamePattern | string? | | Regex to filter which files trigger handlers. Only matching files are processed.
fileAgeFilter | FileAgeFilter? | | Age bounds to skip files that are too new (still uploading) or too old (stale). See File dependency and trigger conditions.
fileDependencyConditions | FileDependencyCondition[]? | | Conditions that block processing until related files exist. See File dependency and trigger conditions.
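
For example, a service that watches /incoming but reacts only to CSV files would carry an annotation like this (a minimal sketch; the optional age and dependency fields are omitted, and ftpListener refers to a listener defined as in the earlier flows):

    import ballerina/ftp;

    @ftp:ServiceConfig {
        path: "/incoming",
        fileNamePattern: ".*\\.csv"
    }
    service on ftpListener {
        // file handlers
    }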

The Attached Listeners list in the left nav pane shows every listener this service is attached to. Click a listener name to edit that listener's configuration inline on the right. To attach another listener, click + Attach Listener at the bottom of the panel — see Attaching listeners to services.

Listener configuration

The listener controls how to connect — protocol, host, authentication, polling interval, and connection behaviour. One listener represents one connection to one remote server. Open the Ftp Listener Configuration view by clicking the listener name (for example, ftpListener) under Listeners in the sidebar, or by clicking the listener name under Attached Listeners inside the FTP Integration Configuration panel.

Field | Description | Default
Name | Identifier for the listener, used to attach services to it. Required. |
Protocol | Connection protocol: ftp, sftp, or ftps. | ftp
Host | Hostname or IP address of the remote server. | 127.0.0.1
Port | Port number of the remote server. | 21
Auth | Authentication record carrying credentials, a private key, and/or an SSL/TLS secureSocket. See the per-protocol sections above for the typical shapes. |
Polling Interval | Seconds between directory polls. | 60
User Dir Is Root | When true, treats the login home directory as / and suppresses directory-change commands. Set this for chrooted or jailed servers. | false
Lax Data Binding | When true, data-binding errors on the handler's content parameter return () instead of surfacing as an error. | false
Connect Timeout | Connection timeout in seconds. | 30.0
Socket Config | Socket read/write timeouts. See ftp:SocketConfig reference. |
Proxy | Proxy configuration for SFTP connections (SFTP only). |
File Transfer Mode | BINARY or ASCII. Use ASCII only for text-only files on servers that require line-ending conversion. | BINARY
SFTP Compression | SSH compression algorithms to negotiate with the server (SFTP only). |
SFTP SSH Known Hosts | Path to an SSH known_hosts file (SFTP only). |
CSV Fail Safe | Fail-safe options for CSV content processing. Malformed records are skipped and written to a side file in the working directory. |
Retry Config | Retry configuration for transient failures during polling or file retrieval. For the retry-with-backoff mechanics and field reference, see ftp:RetryConfig. |
Coordination | Distributed coordination for multi-instance deployments. See High availability. |

What's next

  • Local files — monitor a local directory instead of a remote server
  • Connections — reuse FTP connection credentials across services
  • Data Mapper — transform incoming file payloads between formats