Skip to main content

Documentation Index

Fetch the complete documentation index at: https://powersync-convex.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Jump to: Postgres | MongoDB | MySQL | SQL Server | Convex

Postgres

Version compatibility: PowerSync requires Postgres version 11 or greater.
Configuring your Postgres database for PowerSync generally involves three tasks:
  1. Ensure logical replication is enabled
  2. Create a PowerSync database user
  3. Create powersync logical replication publication
We have documented steps for some specific hosting providers:

1. Ensure logical replication is enabled

No action required: Supabase has logical replication enabled by default.

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.

Prerequisites

The instance must be publicly accessible using an IPv4 address.
Access may be restricted to specific IPs if required — see IP Filtering.

1. Ensure logical replication is enabled

Set the rds.logical_replication parameter to 1 in the parameter group for the instance:

2. Create a PowerSync database user

Create a PowerSync user on Postgres:
-- SQL to create powersync user
CREATE ROLE powersync_role WITH BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';

-- Allow the role to perform replication tasks
GRANT rds_replication TO powersync_role;

-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
PowerSync supports both Azure Database for PostgreSQL and Azure Database for PostgreSQL Flexible Server.

Prerequisites

The database must be accessible on the public internet. Once you have created your database, navigate to SettingsNetworking and enable Public access.

1. Ensure logical replication is enabled

Follow the steps as noted in this Microsoft article to allow logical replication.

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.

1. Ensure logical replication is enabled

In Google Cloud SQL Postgres, enabling the logical replication is done using flags:

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.

1. Ensure logical replication is enabled

To ensure logical replication is enabled:
  1. Select your project in the Neon Console.
  2. On the Neon Dashboard, select Settings.
  3. Select Logical Replication.
  4. Click Enable to ensure logical replication is enabled.

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
Fly Postgres is a Fly app with flyctl sugar on top to help you bootstrap and manage a database cluster for your apps.

1. Ensure logical replication is enabled

Once you’ve deployed your Fly Postgres cluster, you can use the following command to ensure logical replication is enabled:
fly pg config update --wal-level=logical

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.

1. Ensure logical replication is enabled

No action required: PlanetScale has logical replication (wal_level = logical) enabled by default.

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

  -- Create a publication to replicate tables.
  -- PlanetScale does not support ON ALL TABLES so
  -- Specify each table you want to sync
  -- The publication must be named "powersync"
  CREATE PUBLICATION powersync
  FOR TABLE public.lists, public.todos;
Logical replication can be enabled for Render Postgres but you need to contact their support team. Here are some prerequisites before contacting them:
  • The disk size must be at least 10 GB.
  • You must be on a Professional workspace or higher.
The Render support team will ask you the following:
  • Database user for replication (you can use the default or create a new user yourself)
  • Schema(s)
  • Publication name (only if you want them to set FOR ALL TABLES; otherwise, you’ll be able to create publications per table yourself later)
If you want to create the publication FOR ALL TABLES, you must let their support team know that you want the publication name to be powersync.Additional notes they’ll share with you:
We will reserve approximately 1/8 of your storage for wal_keep_size. This will not be available for your normal operations and will always be reserved no matter what. We will also schedule maintenance for the database to pick up the changes. It will be initially scheduled for 14 days out with a deadline of 30 days out. Once the maintenance is added, you can reschedule to any time between immediately and the deadline. If you do nothing, it will run automatically at the initially scheduled time of 14 days out.

1. Ensure logical replication is enabled

ALTER SYSTEM SET wal_level = logical;
ALTER SYSTEM SET max_replication_slots = 10;
ALTER SYSTEM SET max_wal_senders = 10;

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
See Xata’s documentation for more information on setting up logical replication with Xata.
For other providers and self-hosted databases:
Need help? Simply contact us on Discord and we’ll help you get set up.

1. Ensure logical replication is enabled

PowerSync reads the Postgres WAL using logical replication in order to create buckets in accordance with your Sync Streams (or legacy Sync Rules).If you are managing Postgres yourself, set wal_level = logical in your config file:
Alternatively, you can use the below SQL commands to check and ensure logical replication is enabled:
-- Check the replication type

SHOW wal_level;

-- Ensure logical replication is enabled

ALTER SYSTEM SET wal_level = logical;
Note that Postgres must be restarted after changing this config.If you’re using a managed Postgres service, there may be a setting for this in the relevant section of the service’s admin console.

2. Create a PowerSync database user

-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;  

-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role; 
To restrict read access to specific tables, explicitly list allowed tables for both the SELECT privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).

3. Create powersync publication

-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you’re dealing with large data volumes, you’ll want to specify a comma-separated subset of tables to replicate instead of FOR ALL TABLES.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.

Unsupported Hosted Postgres Providers

Due to the logical replication requirement, not all Postgres hosting providers are supported. Notably, some “serverless Postgres” providers do not support logical replication, and are therefore not supported by PowerSync yet.

See Also

MongoDB

Version compatibility: PowerSync requires MongoDB version 6.0 or greater.
For more information on migrating from MongoDB Atlas Device Sync to PowerSync, see our migration guide.

Permissions Required: MongoDB Atlas

For MongoDB Atlas databases, the minimum permissions when using built-in roles are:
readWrite@<your_database>._powersync_checkpoints
read@<your_database>
To allow PowerSync to automatically enable changeStreamPreAndPostImages on replicated collections (i.e. the Post Images setting for the MongoDB connection on your PowerSync instance is set to Auto-Configure, which is the default for new PowerSync instances), additionally add the dbAdmin permission:
readWrite@<your_database>._powersync_checkpoints
read@<your_database>
dbAdmin@<your_database>
If you are replicating from multiple databases in the cluster, you need read permissions on the entire cluster, in addition to the above:
readAnyDatabase@admin

Privileges Required: Self-Hosted / Custom Roles

For self-hosted MongoDB, or for creating custom roles on MongoDB Atlas, PowerSync requires the following privileges/granted actions:
  • listCollections: This privilege must be granted on the database being replicated.
  • find: This privilege must be granted either at the database level or on specific collections.
  • changeStream: This privilege must be granted at the database level (not on individual collections). In MongoDB Atlas, set collection: "" or check Apply to any collection in MongoDB Atlas if you want to apply this privilege on any collection.
    • If replicating from multiple databases, this must apply to the entire cluster. Specify db: "" or check Apply to any database in MongoDB Atlas.
  • For the _powersync_checkpoints collection add the following privileges: createCollection, dropCollection, find, changeStream, insert, update, and remove
  • To allow PowerSync to automatically enable changeStreamPreAndPostImages on replicated collections (i.e. the Post Images setting for the MongoDB connection on your PowerSync instance is set to Auto-Configure, which is the default for new PowerSync instances), additionally add the collMod permission on the database and all collections being replicated.

Post Images

To replicate data from MongoDB to PowerSync in a consistent manner, PowerSync uses Change Streams with post-images to get the complete document after each change. This requires the changeStreamPreAndPostImages option to be enabled on replicated collections. PowerSync supports three configuration options for post-images:
  1. Off: (post_images: off): Uses fullDocument: 'updateLookup' for backwards compatibility. This was the default for older instances. However, this may lead to consistency issues, so we strongly recommend enabling post-images instead.
  2. Auto-Configure: (post_images: auto_configure) The default for new instances: Automatically enables the changeStreamPreAndPostImages option on collections as needed. Requires the permissions/privileges mentioned above. If a collection is removed from Sync Streams (or legacy Sync Rules), you need to manually disable changeStreamPreAndPostImages on that collection.
  3. Read-only: (post_images: read_only): Uses fullDocument: 'required' and requires changeStreamPreAndPostImages: { enabled: true } to be set on every collection referenced in your Sync Streams/Sync Rules. Replication will error if this is not configured. This option is ideal when permissions are restricted.
To manually configure collections for read_only mode, run this command on each collection:
db.runCommand( {
 collMod: <collection>,
 changeStreamPreAndPostImages: { enabled: <boolean> }
} )
You can view which collections have the option enabled using:
db.getCollectionInfos().filter((c) => c.options?.changeStreamPreAndPostImages?.enabled);
Post-images can be configured for PowerSync instances as follows:

PowerSync Cloud:

Configure the Post Images setting in the database connection configuration in the PowerSync Dashboard. Select your project and instance and go to Database Connections to edit the connection settings.

Self-Hosted PowerSync:

Configure post_images in the service.yaml file.
If you need to use private endpoints with MongoDB Atlas, see Private Endpoints (AWS only).

MySQL

MySQL support is currently in a Beta release.
Version compatibility: PowerSync requires MySQL version 5.7 or greater.
PowerSync reads from the MySQL binary log (binlog) to replicate changes. We use a modified version of the Zongji MySQL binlog listener to achieve this.

Binlog Configuration

To ensure that PowerSync can read the binary log, you need to configure your MySQL server to enable binary logging and configure it with the following server command options:
  • server_id: Uniquely identifies the MySQL server instance in the replication topology. Default value is 1.
  • log_bin: ON. Enables binary logging. Default is ON for MySQL 8.0 and later, but OFF for MySQL 5.7.
  • enforce_gtid_consistency: ON. Enforces GTID consistency. Default is OFF.
  • gtid_mode: ON. Enables GTID based logging. Default is OFF.
  • binlog_format: ROW. Sets the binary log format to row-based replication. This is required for PowerSync to correctly replicate changes. Default is ROW.
  • binlog_row_image: FULL. Captures the complete row data for each change. This is required for PowerSync to correctly replicate changes. Default is FULL. The MINIMAL/NOBLOB options will be supported in a future release.
These can be specified in a MySQL option file:
server_id=<Unique Integer Value>
log_bin=ON
enforce_gtid_consistency=ON
gtid_mode=ON
binlog_format=ROW
binlog_row_image=FULL

Database User Configuration

PowerSync also requires a MySQL user with REPLICATION and SELECT privileges on the source databases. These can be added by running the following SQL commands:
-- Create a user with necessary privileges
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';

-- Grant replication client privilege
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl_user'@'%';

-- Grant select access to the specific database
GRANT SELECT ON <source_database>.* TO 'repl_user'@'%';

-- Apply changes
FLUSH PRIVILEGES;
It is possible to constrain the MySQL user further and limit access to specific tables. Care should be taken to ensure that all the tables in your Sync Streams/Sync Rules are included in the grants.
-- Grant select to the users and the invoices tables in the source database
GRANT SELECT ON <source_database>.users TO 'repl_user'@'%';
GRANT SELECT ON <source_database>.invoices TO 'repl_user'@'%';

-- Apply changes
FLUSH PRIVILEGES;

Additional Configuration (Optional)

Binlog

The binlog can be configured to limit logging to specific databases. By default, events for tables in all the databases on the MySQL server will be logged.
  • binlog-do-db: Only updates for tables in the specified database will be logged.
  • binlog-ignore-db: No updates for tables in the specified database will be logged.
Examples:
# Only row events for tables in the user_db and invoices_db databases will appear in the binlog.
binlog-do-db=user_db
binlog-do-db=invoices_db
# Row events for tables in the user_db will be ignored. Events for any other database will be logged.
binlog-ignore-db=user_db

SQL Server

SQL Server support is currently in a Beta release.
Version compatibility: - PowerSync requires SQL Server 2019+ or Azure SQL Database. - SQL Server support was introduced in version 1.18.1 of the PowerSync Service.
PowerSync can replicate data from a change data capture (CDC) enabled SQL Server. The CDC process builds up change tables based on changes to tracked tables, by scanning the SQL Server transaction log on a fixed interval. PowerSync then polls these change tables using built-in stored procedures and applies the changes to the PowerSync bucket storage. For more information about CDC, see:

Supported Editions/Versions

DatabaseEditionVersionMin Service Tier
SQL Server 2019+Standard, Enterprise, Developer, Evaluation15.0+N/A
Azure SQL*Database, Managed instanceN/AAny service tier on vCore purchasing model. S3 tier and up on DTU purchasing model. See: Azure SQL Database compute requirements
* Azure SQL Database is always running on the latest version of the SQL Server DB Engine

Limitations / Known Issues

  • Spatial data types are returned as JSON objects as supplied by the Tedious node-mssql client. See the notes here.
  • There is an inherent latency in replicating data from SQL Server to PowerSync. See Latency for more details.

Database Setup Requirements

1. Enable CDC on the Database

Change Data Capture (CDC) needs to be enabled on the database:
-- Enable CDC on the database if not already enabled
USE <YOUR_DATABASE_NAME>; -- Only for SQL Server. To switch databases on Azure SQL, you have to connect to the specific database.
IF (SELECT is_cdc_enabled FROM sys.databases WHERE name = '<YOUR_DATABASE_NAME>') = 0
BEGIN
    EXEC sys.sp_cdc_enable_db;
END

2. Create the PowerSync Database User

Create a database user for PowerSync with the following permissions: Required permissions:
  • Read/Write permissions on the _powersync_checkpoints table
  • Read permissions on the replicated tables
  • cdc_reader role (grants access to CDC changetables and functions)
  • SELECT permission on the CDC schema (grants access to CDC metadata tables)
  • VIEW DATABASE PERFORMANCE STATE (SQL Server and Azure SQL)
  • VIEW SERVER PERFORMANCE STATE (SQL Server only)
Create the login for the user first. This is done on the server / master database level:
-- Create a SQL login for the powersync_user if missing. Note SQL Logins are created at the server level.
USE [master]; -- Use only works on SQL Server. For Azure SQL you have to connect to the master database to run these commands.
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = 'powersync_user')
BEGIN
    CREATE LOGIN [powersync_user] WITH PASSWORD = 'YOUR_DB_USER_PASSWORD', CHECK_POLICY = ON;
END
Create the database user next. This is done on the specific database level:
-- Create the powersync_user database user if missing. Note DB users are created at the database level.
USE [<YOUR_DATABASE_NAME>]; -- Use only works on SQL Server. For Azure SQL you have to connect to the specific database to run these commands.
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = 'powersync_user')
BEGIN
    CREATE USER [powersync_user] FOR LOGIN [powersync_user];
END
Grant the necessary permissions for the user:
-- Grant SELECT on the specific replicated tables
GRANT SELECT ON dbo.<YOUR_TABLE_NAME> TO [powersync_user];

-- Grant access to CDC tables and functions using the cdc_reader role
IF IS_ROLEMEMBER('cdc_reader', 'powersync_user') = 0
BEGIN
    ALTER ROLE cdc_reader ADD MEMBER powersync_user;
END

-- Grant select on the CDC schema
GRANT SELECT ON SCHEMA::cdc TO [powersync_user];

-- Grant the necessary permissions to the user to access the performance state views

-- Note: For Azure SQL, only VIEW DATABASE PERFORMANCE STATE is required. Granted at the database level.
-- PowerSync uses this to access the sys.dm_db_log_stats DMV and the sys.dm_db_partition_stats DMV
GRANT VIEW DATABASE PERFORMANCE STATE TO [powersync_user];

-- VIEW SERVER PERFORMANCE STATE is only necessary on SQL Server (not Azure SQL). Granted at the server/master database level.
-- PowerSync requires this permission to access the sys.dm_db_log_stats DMV on SQL Server.
USE [master];
BEGIN
    GRANT VIEW SERVER PERFORMANCE STATE TO [powersync_user];
END
For Azure SQL Database, the VIEW SERVER PERFORMANCE STATE permission is not available and not required. Only VIEW DATABASE PERFORMANCE STATE is needed.

3. Create the PowerSync Checkpoints Table

PowerSync requires a _powersync_checkpoints table to generate regular checkpoints. CDC must be enabled for this table:
-- Create the PowerSync checkpoints table in your schema
IF OBJECT_ID('dbo._powersync_checkpoints', 'U') IS NULL
BEGIN
CREATE TABLE dbo._powersync_checkpoints (
    id INT IDENTITY PRIMARY KEY,
    last_updated DATETIME NOT NULL DEFAULT GETUTCDATE()
);
END

-- Enable CDC for the powersync checkpoints table if not already enabled
-- Note: the cdc_reader role created the first time CDC is enabled on a table
IF NOT EXISTS (SELECT 1 FROM cdc.change_tables WHERE source_object_id = OBJECT_ID(N'dbo._powersync_checkpoints'))
BEGIN
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'_powersync_checkpoints',
        @role_name     = N'cdc_reader',
        @supports_net_changes = 0;
END
Grant read/write access to the table for the powersync_user:
GRANT SELECT, INSERT, UPDATE ON dbo._powersync_checkpoints TO [powersync_user];

4. Enable CDC on Tables

CDC must be enabled for all tables that need to be replicated:
-- Enable CDC for specific tables in your schema if not already enabled
IF NOT EXISTS (SELECT 1 FROM cdc.change_tables WHERE source_object_id = OBJECT_ID(N'dbo.<YOUR_TABLE_NAME>'))
BEGIN
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'<YOUR_TABLE_NAME>',
        @role_name     = N'cdc_reader',
        @supports_net_changes = 0;
END
Repeat this for each table you want to replicate. Note that PowerSync does not currently use the net changes functionality so @supports_net_changes can be set to 0.

CDC Management

Management and performance tuning of CDC is left to the developer and is primarily done by modifying the change capture jobs. See Change Data Capture Jobs (SQL Server) for more details. Capture Job settings of interest to PowerSync:
  • Polling Interval: The frequency at which the capture job reads changes from the transaction log. Default is every 5 seconds. Can be set to 0 so that there is zero downtime between scans, but this will impact database performance.
  • Max Trans: The maximum number of transactions that are processed per scan. Default is 500.
  • Max Scans: The maximum number of scans that are performed per capture job scan cycle. Default is 10.
Cleanup Job settings of interest to PowerSync:
  • Retention: The retention period before data is expired from the CDC tables. Default is 3 days. If your PowerSync instance is offline for longer than this period, data will need to be fully re-synced. Specified in minutes.
Recommended Capture Job settings:
ParameterRecommended Value
maxtrans5000
maxscans10
pollinginterval1 second
For Azure SQL Database, the CDC capture and cleanup jobs are managed automatically. Manual configuration is greatly limited. See Azure CDC Customization Limitations. The main limitation is that the capture job polling interval cannot be modified and is fixed at 20 seconds. It is, however, still possible to manually trigger the capture job on demand.

Latency

Due to the fundamental differences in how CDC works compared to logical replication (Postgres) or binlog reading (MySQL), there is an inherent latency in replicating data from SQL Server to PowerSync. The latency is determined by two factors:
  1. Transaction Log Scan Interval: The frequency at which the CDC capture job scans the transaction log for changes. The default value of 5 seconds can be changed by modifying the capture job settings on SQL Server. The recommended value is 1 second, though it can also be set to 0 if database load permits. For Azure SQL Database, the default value is 20 seconds and cannot be changed. See Azure CDC Customization Limitations for more details.
  2. Polling Interval: The frequency at which PowerSync polls the CDC change tables for changes. The default value is once every 1000ms. This can be changed by setting the pollingIntervalMs parameter in the PowerSync configuration.

Memory Management

During each polling cycle, PowerSync will read a limited number of transactions from the CDC change tables. The default value of 10 transactions can be changed by setting the pollingBatchSize parameter in the PowerSync configuration. Increasing this will increase throughput at the cost of increased memory usage. If the volume of transactions being replicated is high, and memory is available, it is recommended to increase this value.
Connection configuration parameters for the PowerSync SQL Server Adapter like pollingIntervalMs and pollingBatchSize can currently only be set when self-hosting PowerSync. See SQL Server Additional Configuration for more details. We are planning to expose these settings for SQL Server source database connections in the PowerSync Dashboard for PowerSync Cloud instances.
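When self-hosting, these adapter parameters are set on the SQL Server connection in the service configuration. The YAML shape below is an assumption modeled on the Convex connection example later on this page; confirm the exact structure against SQL Server Additional Configuration:

```yaml
replication:
  connections:
    - type: sqlserver
      # ... connection details (host, database, credentials) ...
      # How often PowerSync polls the CDC change tables (default: 1000ms).
      pollingIntervalMs: 1000
      # Transactions read per polling cycle (default: 10).
      # Raise this for high write volumes if memory allows.
      pollingBatchSize: 10
```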

Convex

Convex support is currently in an Open Alpha release. APIs, configuration, schema-change handling, metrics, and replication behavior may change before this connector is considered stable.
PowerSync reads Convex data using the Convex Streaming Export API. Initial replication pins a single Convex snapshot cursor and snapshots each selected table at that cursor. Streaming replication then reads the global document_deltas stream and filters rows according to your Sync Streams. Convex does not support user-defined database schemas or namespaces in the same way as SQL databases. In Sync Streams, use the default convex schema when qualifying source tables.

Connection Requirements

PowerSync requires:
  • A Convex deployment URL.
  • A Convex deploy key. In the Convex Dashboard, go to SettingsGeneral and generate a deploy key for the deployment PowerSync should replicate.
  • The powersync_checkpoints table and createCheckpoint mutation described below.
Convex deploy keys grant full read and write access to your Convex data. Use a deploy key for the correct environment, store it as a secret, and rotate it if it is exposed.

Checkpoint Table

PowerSync uses a small Convex table to generate write checkpoint markers. Convex table names cannot start with _, so the table is named powersync_checkpoints. Add the table to your Convex schema:
convex/schema.ts
import { defineSchema, defineTable } from 'convex/server';
import { v } from 'convex/values';

export default defineSchema({
  // ... your other tables

  powersync_checkpoints: defineTable({
    last_updated: v.float64()
  })
});

Checkpoint Mutation

Deploy a Convex mutation named powersync_checkpoints:createCheckpoint. PowerSync calls this mutation after recording a write checkpoint so the Convex delta stream advances even when the app is otherwise idle.
convex/powersync_checkpoints.ts
import { mutation } from './_generated/server';

export const createCheckpoint = mutation({
  args: {},
  handler: async (ctx) => {
    const existing = await ctx.db.query('powersync_checkpoints').first();

    if (existing) {
      await ctx.db.patch(existing._id, { last_updated: Date.now() });
    } else {
      await ctx.db.insert('powersync_checkpoints', { last_updated: Date.now() });
    }
  }
});
PowerSync excludes powersync_checkpoints from replicated source tables. The table exists only to advance the replication cursor for write checkpoint acknowledgements.

Client Writes

PowerSync does not write application data directly to Convex. Your app still needs an upload path that takes queued client-side writes and applies them through Convex mutations. In most Convex apps, you already define one or more mutation functions for each writable table, and your PowerSync backend connector can call those same mutations from uploadData().

If you use Convex Auth tokens directly for PowerSync client authentication, configure PowerSync to accept the convex JWT audience. For self-hosted development this is configured in client_auth.audience; for PowerSync Cloud you can configure a custom audience in the instance settings. See Custom Authentication.
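As a sketch of that upload path, the handler below routes each queued client write to a table-specific Convex mutation. The mutation names (`lists:create`, etc.), the `MutationCaller` type, and the CRUD shape are illustrative assumptions rather than the actual PowerSync or Convex APIs; in a real app, your backend connector's `uploadData()` would invoke your deployed Convex mutations (e.g. via `ConvexHttpClient`):

```typescript
// Hypothetical shape of a queued client-side write.
type CrudOp = {
  op: 'PUT' | 'PATCH' | 'DELETE';
  table: string;
  id: string; // client-generated UUID, stored in a `uuid` field on the Convex document
  opData?: Record<string, unknown>;
};

// Stand-in for a Convex mutation caller (assumed interface).
type MutationCaller = (name: string, args: Record<string, unknown>) => Promise<void>;

// Route each queued write to a table-specific Convex mutation.
async function uploadBatch(ops: CrudOp[], callMutation: MutationCaller): Promise<void> {
  for (const op of ops) {
    switch (op.op) {
      case 'PUT':
        // Forward the client-generated UUID so it can be synced back as `id`.
        await callMutation(`${op.table}:create`, { uuid: op.id, ...op.opData });
        break;
      case 'PATCH':
        await callMutation(`${op.table}:update`, { uuid: op.id, ...op.opData });
        break;
      case 'DELETE':
        await callMutation(`${op.table}:remove`, { uuid: op.id });
        break;
    }
  }
}
```

Note that the mutations are looked up by the client-generated `uuid` field, not the Convex `_id`, matching the ID-mapping pattern described in the Sync Streams examples below.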

Schema Changes

Convex’s schema endpoint does not expose a monotonic schema version or checkpointable schema cursor. PowerSync uses the schema endpoint for table discovery and diagnostics, but it does not continuously diff Convex schema versions. When you add, remove, or change Convex fields or tables:
  1. Update your Convex schema and deploy it.
  2. Update and redeploy your Sync Config.
Convex schema discovery can omit fields that have no stored values yet. If a field is defined in your Convex schema but no document currently contains that field, it may not appear in PowerSync schema diagnostics until data exists for it. When that happens, PowerSync falls back to runtime value inspection for later rows.

Sparse Fields and Int64 Values

Convex Int64 values arrive in raw documents as base-10 strings. PowerSync can convert those strings to SQLite integers when Convex json_schemas reports the field as an Int64, but sparse fields can be missing from json_schemas until a document contains a value for them. Without schema metadata, an Int64 string is indistinguishable from a regular string. To keep synced values consistent, cast Convex Int64 fields to TEXT in your Sync Streams. See the examples below.

Latency

PowerSync polls the Convex document_deltas endpoint for changes. This means there is an inherent latency between a write being committed in Convex and that change being replicated into PowerSync. The polling interval defaults to once every 1000ms. When self-hosting PowerSync, you can configure this with the polling_interval_ms connection parameter:
service.yaml
replication:
  connections:
    - type: convex
      deployment_url: https://<your-deployment>.convex.cloud
      deploy_key: <your-deploy-key>
      polling_interval_ms: 1000
Lowering this value can reduce replication latency, but it increases the number of requests made to Convex and the work performed by the PowerSync Service.

Limitations

  • Convex support is currently limited to the default Convex component.
  • Convex json_schemas does not expose a schema change token or revision cursor that can be checkpointed.
  • Convex json_schemas can omit fields until stored data exists for those fields. This can affect type inference for optional or sparsely populated fields.
  • Convex Int64 and Bytes values are ambiguous in raw JSON documents without schema metadata. Cast Int64 fields to TEXT in Sync Streams when you need stable client-side types.
  • PowerSync reports time-based replication lag for Convex, but not byte-based lag.

Sync Streams Examples

Use the default convex schema when querying Convex tables.
Convex document IDs are generated by Convex and are exposed as _id. Clients cannot create Convex IDs before inserting documents into Convex. PowerSync clients need stable local IDs before writes are uploaded, so use a client-generated UUID column as the synced id and keep _id as the Convex server-generated document ID. This is similar to the pattern described in Sequential ID Mapping.
The client creates a UUID in its local id column before the write is uploaded. Your Convex mutation should store that value in a separate uuid field on the Convex document. PowerSync then syncs uuid AS id back to the client, so the client keeps the same stable local ID while Convex keeps its own server-generated _id. The example below uses one stream for a user’s lists and todos:
  • uuid as the synced client-side id instead of the Convex _id.
  • list_uuid as the synced relationship column instead of the Convex list_id.
  • CAST(an_int64_column AS TEXT) to keep a Convex Int64 value stable on the client.
  • substring(auth.user_id(), 1, 32) to extract the Convex user ID from a Convex Auth JWT subject. Convex Auth subjects include the 32-character user ID followed by | and the user session ID.
config:
  edition: 3

streams:
  user_data:
    with:
      # Extract the Convex user ID from the JWT subject.
      # Convex Auth subjects include `[32 character user ID]|[user session ID]`.
      user_lists: |
        SELECT uuid
        FROM convex.lists
        WHERE archived != true
          AND owner_id = substring(auth.user_id(), 1, 32)
    auto_subscribe: true
    queries:
      - |
        SELECT
          -- The client creates `uuid`, which becomes the client's `id` column.
          -- Keep Convex `_id` as the server-generated document ID.
          uuid AS id,
          name,
          owner_id
        FROM convex.lists
        WHERE uuid IN user_lists
      - |
        SELECT
          -- Use the client-generated todo UUID as the synced `id`.
          uuid AS id,
          description,
          -- Map relationships that use Convex IDs, such as `list_id`,
          -- to the related table's local UUID column.
          list_uuid,
          -- Cast Convex Int64 values to TEXT to avoid inconsistent inferred types
          -- when json_schemas does not include sparse field metadata.
          CAST(an_int64_column AS TEXT) AS an_int64_column
        FROM convex.todos
        WHERE list_uuid IN user_lists

Next Step

Next, connect PowerSync to your database:

PowerSync Cloud

Self-Hosted PowerSync