Prequel Import helps you reliably sync your customers' data from their preferred data platform into your application. Prequel powers this as an embedded feature that works reliably and securely across data platforms, arbitrary data models, and extremely high volumes.

[Figure: Prequel Import architecture]

Requirements

To use Prequel Import, you will need to provide an object storage bucket (S3, GCS, or Azure) and implement support for at least one endpoint type: a spec-compliant API endpoint or a Kafka topic. With the API endpoint option, you receive individual records as they change (a minimal sketch follows below). Learn more in the Dataset API Specification.
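
As an illustration only, the sketch below shows the general shape such a receiving endpoint might take. The route, payload fields, and batch semantics here are assumptions for this example; the actual contract is defined by the Dataset API Specification.

```typescript
// A minimal sketch of an HTTP endpoint that could receive pushed records.
// The route, payload shape, and batching are illustrative assumptions;
// the Dataset API Specification defines the real contract.
import express from "express";

const app = express();
app.use(express.json({ limit: "10mb" }));

// Hypothetical route: batches of changed records are POSTed per dataset.
app.post("/prequel/datasets/:datasetId/records", (req, res) => {
  const { datasetId } = req.params;
  const records: Record<string, unknown>[] = req.body.records ?? [];

  // Upsert each record into your own store, keyed by your primary key.
  for (const record of records) {
    // persist(datasetId, record); // your storage layer goes here
  }

  // Acknowledge receipt so the batch can be marked as delivered.
  res.status(200).json({ received: records.length, dataset: datasetId });
});

app.listen(3000);
```

Responding with a success status lets the sync treat the batch as delivered; a non-2xx response would signal that the batch should be retried.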

Configuring Prequel Import

Prequel Import is configured by defining the datasets or endpoints that you want to allow your users to push data into. Once configured, you can begin connecting customers, or data "providers," and syncing data.
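
As a hedged sketch of what defining a dataset might look like from code: the host, endpoint path, and field names below are illustrative assumptions, not the documented Prequel API. Consult the API reference for the actual request shapes.

```typescript
// Hypothetical sketch of registering a dataset that providers can push into.
// The host, route, auth scheme, and field names are assumptions for
// illustration only.
const PREQUEL_API = "https://api.prequel.example"; // placeholder host

async function createDataset(apiKey: string): Promise<void> {
  const response = await fetch(`${PREQUEL_API}/datasets`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "orders",                  // dataset your users can map data to
      primary_key: "order_id",         // used to deduplicate incoming records
      columns: [
        { name: "order_id", type: "string" },
        { name: "amount", type: "number" },
        { name: "updated_at", type: "timestamp" },
      ],
    }),
  });
  if (!response.ok) {
    throw new Error(`Dataset creation failed: ${response.status}`);
  }
}
```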

Understanding Prequel Import architecture

Prequel Import works by regularly detecting changes in the provider's source and reliably delivering those changes to the designated endpoint. Architecturally, this works by connecting to the source, maintaining a secure cache of the source's state in object storage, and resiliently pushing detected changes to the designated endpoints. To learn more, see the Dataflow guide.
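
The following sketch is purely conceptual and does not reflect Prequel's internals; it only illustrates the cycle described above: diff the source against a cached snapshot, deliver the detected changes with retries, and advance the cache only once delivery succeeds.

```typescript
// Conceptual sketch of one sync cycle: diff the source against a cached
// snapshot, push detected changes with retries, then update the cache.
// All names and signatures here are illustrative assumptions.
type Row = { id: string; updatedAt: string };

async function syncCycle(
  fetchSource: () => Promise<Row[]>,             // read the provider's source
  readCache: () => Promise<Map<string, string>>, // id -> last-seen updatedAt
  writeCache: (rows: Row[]) => Promise<void>,    // persist the new snapshot
  pushChanges: (rows: Row[]) => Promise<void>,   // deliver to the endpoint
): Promise<void> {
  const [rows, cache] = await Promise.all([fetchSource(), readCache()]);

  // A row is "changed" if it is new or its updatedAt moved forward.
  const changed = rows.filter((r) => cache.get(r.id) !== r.updatedAt);
  if (changed.length === 0) return;

  // Retry delivery so transient endpoint failures don't drop changes.
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      await pushChanges(changed);
      break;
    } catch (err) {
      if (attempt === 3) throw err;
    }
  }

  // Only advance the cached state once delivery has succeeded.
  await writeCache(rows);
}
```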

Exposing Prequel Import to your users

Prequel Import is optimized for a fully embedded experience, built natively into your platform. To support that implementation method, Prequel Import is API-first, with a variety of developer tools. Besides the fully embedded UX, you can also onboard data providers using our Admin UI or Magic Links. To learn more, see the Customer Experience guide.
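
For a flavor of the API-first approach, here is a hypothetical sketch of requesting a shareable onboarding link for a provider. The host, route, request body, and response fields are assumptions for illustration, not the documented API.

```typescript
// Hypothetical sketch: request a shareable link a data provider can use
// to self-serve connecting their source. Route and fields are assumptions.
async function createOnboardingLink(
  apiKey: string,
  providerId: string,
): Promise<string> {
  const response = await fetch("https://api.prequel.example/links", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ provider_id: providerId }),
  });
  const { url } = await response.json();
  return url as string; // hand this link to the provider to complete setup
}
```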