1. Purpose
This Policy sets out how Ascendix Composer (“Application”) processes user data when you use its AI-powered features, including transmission of customer content to third-party large language model (“LLM”) providers. It is intended to ensure transparency, support compliance with applicable data protection laws, and help customers use these features securely and responsibly.
2. Key Definitions
- Customer – the organization that has licensed or subscribed to Ascendix Composer.
- User – an individual authorized by the Customer to use the Application.
- Customer Content – any data, files, documents, templates, variables, or schemas provided or generated by Users within the Application.
- LLM Provider – a third-party provider (currently OpenAI, Google, or Anthropic) that offers the large language model processing used by the Application (e.g., for template generation, variable detection, or mapping).
3. Scope
This Policy applies to all users who enable or access AI features in Ascendix Composer, including but not limited to:
- Uploading and converting PDFs to HTML templates;
- Automatic detection and mapping of dynamic variables;
- AI-driven schema generation and mapping to Salesforce or other data sources.
If you do not use these AI features, your data will not be transmitted to LLM providers under this Policy.
4. Description of AI Processing
When AI features are enabled:
- Customer Content (e.g., PDF content, HTML templates, template variables, sample records, mapping schemas) may be sent from the Application to the LLM Provider for:
a. Template generation and transformation;
b. Detection and suggestion of dynamic variables;
c. Proposing field mappings and schema relationships.
- The LLM Provider processes this content to generate responses that are returned to the Application and shown to Users, who remain responsible for validating and accepting or rejecting AI suggestions.
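For illustration only, the sketch below outlines this request/response flow and the User's accept-or-reject step. All names (AiRequest, sendToLlmProvider, runAiFeature, and so on) are hypothetical and do not describe the Application's actual API.

    // Hypothetical sketch of the AI processing flow described above.
    // All names and shapes are illustrative, not the Application's real implementation.

    interface AiRequest {
      feature: "templateGeneration" | "variableDetection" | "fieldMapping";
      customerContent: string;            // e.g., PDF text, HTML template, or sample records
    }

    interface AiSuggestion {
      id: string;
      generatedByAi: true;                // AI output is labeled as such (Section 7)
      payload: string;                    // e.g., proposed template, variables, or mappings
    }

    // Placeholder for the call to the third-party LLM Provider;
    // content is transmitted unmasked (Section 5) over TLS (Section 10).
    async function sendToLlmProvider(request: AiRequest): Promise<AiSuggestion> {
      return { id: "suggestion-1", generatedByAi: true, payload: `processed: ${request.feature}` };
    }

    // The User, not the Application, decides whether a suggestion is applied.
    async function runAiFeature(
      request: AiRequest,
      userAccepts: (suggestion: AiSuggestion) => boolean
    ): Promise<AiSuggestion | null> {
      const suggestion = await sendToLlmProvider(request);
      return userAccepts(suggestion) ? suggestion : null;   // rejected suggestions are discarded
    }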
5. Unmasked Data Transmission
- No masking or tokenization. In the current version of the AI feature, Customer Content is transmitted to LLM Providers in its original, unmasked form. This may include business information, free-text notes, and any other content that Users upload or type into the AI workflows.
- Prohibited data types. Because LLM Providers store Customer Content in accordance with their own retention policies and contractual commitments, Users should exercise great caution before submitting the following categories of data to AI features:
a. Personal data that is classified as sensitive or special category (e.g., health data, biometric data, data on political opinions, etc.);
b. Payment card data or full financial account identifiers;
c. Government-issued identification numbers (e.g., SSN, passport number);
d. Any data regulated by sector-specific laws (e.g., PHI under HIPAA, payment data subject to PCI-DSS), unless the Customer has independently verified that such use is compliant.
- Customers are responsible for ensuring that their own data classification and governance policies permit use of the AI feature for the data types they choose to process.
6. Legal Roles and Responsibilities
- The Customer determines the business purposes for which the Application is used and decides which systems, documents, and records to connect or upload (e.g., which PDFs to process, which Salesforce objects and fields to include in templates or mappings).
- The Customer is solely responsible for the documents and data uploaded to the Application, including ensuring that it has all necessary rights, licenses, and consents to use such content and to allow Ascendix to process it.
- Ascendix, as the provider of the Application, generally acts as a data processor (or service provider) on behalf of the Customer.
- LLM Providers engaged by Ascendix operate as subprocessors or service providers to Ascendix, under separate contractual terms (including data protection clauses) that govern how they handle Customer Content.
7. User Consent and Transparency
- Consent / Acknowledgement. Before a User first enables or uses AI features, the Application will present a clear notice explaining that Customer Content will be sent to third-party LLM Providers without masking, along with a link to this Policy.
- Ongoing transparency. The Application will clearly label AI-generated content, suggestions, and mappings so that Users can distinguish them from manually created content.
8. Data Minimization
- Only the minimum data reasonably necessary to perform the requested AI operation will be transmitted to the LLM Provider.
- Where feasible, the Application will design prompts and requests to LLMs to avoid including unnecessary data (for example, sending representative record samples rather than full datasets; a simplified sketch follows this list).
- Users are encouraged to review and, where possible, redact or generalize information in documents before submitting them to the AI feature.
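As an illustration of the sampling approach mentioned above, the hedged sketch below includes only a small, representative subset of records in a prompt instead of the full dataset. The helper names and sample size are assumptions, not the Application's actual behavior.

    // Illustrative only: build a prompt from a few representative records
    // instead of the entire dataset (data minimization, Section 8).
    // pickRepresentativeSamples and the sample size are hypothetical.

    interface RecordSample {
      [field: string]: string | number | null;
    }

    function pickRepresentativeSamples(records: RecordSample[], maxSamples = 3): RecordSample[] {
      if (records.length <= maxSamples) return records;
      const step = Math.floor(records.length / maxSamples);
      const samples: RecordSample[] = [];
      for (let i = 0; i < maxSamples; i++) {
        samples.push(records[i * step]);    // evenly spaced records, not the whole dataset
      }
      return samples;
    }

    function buildMappingPrompt(records: RecordSample[]): string {
      const samples = pickRepresentativeSamples(records);
      return [
        "Suggest field mappings for the following representative records.",
        JSON.stringify(samples, null, 2),   // only the sampled subset is transmitted
      ].join("\n");
    }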
9. Data Retention and Logging
- Within the Application:
a. The Application does not store or cache Customer Content processed by the AI feature beyond the following retention rules (illustrated in the sketch after this list):
i. The uploaded PDF is deleted immediately after a template has been successfully created from it;
ii. If errors occur during template creation, the PDF is retained for 1 day and then permanently deleted;
iii. Screenshots of the PDF pages made for layout reproduction are retained for 7 days and then permanently deleted.
- At the LLM Provider:
a. LLM Providers store Customer Content in accordance with their own retention policies and contractual commitments.
b. Ascendix will use commercially reasonable efforts to select LLM Providers that do not use Customer Content submitted through the Application for training their models, unless the Customer has explicitly agreed otherwise.
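The in-Application retention rules above could be enforced by a periodic cleanup job along the lines of the hedged sketch below. The storage shape, field names, and helper are assumptions, not a description of the Application's actual implementation.

    // Illustrative cleanup sketch for the in-Application retention rules above.
    // Storage shape and helper names are hypothetical.

    interface StoredArtifact {
      kind: "pdf" | "pageScreenshot";
      uploadedAt: Date;
      templateCreated: boolean;   // true once a template was successfully generated
    }

    const DAY_MS = 24 * 60 * 60 * 1000;

    function shouldDelete(artifact: StoredArtifact, now: Date): boolean {
      const ageMs = now.getTime() - artifact.uploadedAt.getTime();
      if (artifact.kind === "pdf") {
        // Delete immediately on success; otherwise keep the PDF for at most 1 day.
        return artifact.templateCreated || ageMs > 1 * DAY_MS;
      }
      // Screenshots made for layout reproduction are kept for 7 days.
      return ageMs > 7 * DAY_MS;
    }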
10. Security Controls
Access Control
- Actions that result in data being transmitted to LLM Providers are logged with relevant metadata for audit and security review.
- Only staff with appropriate role-based permissions may access the service and these logs.
Transmission Security
- All data transmitted between the Application and LLM Providers is encrypted in transit using industry-standard protocols (e.g., TLS 1.2 or higher).
Provider Due Diligence
- Ascendix will conduct appropriate due diligence on LLM Providers, including review of their Terms of Service, security posture, and Privacy Policies.
Error Handling
- Error messages and logs generated by the AI feature will be designed to avoid exposing sensitive Customer Content. Where necessary, errors will be generalized or redacted.
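A minimal sketch of the audit-logging and error-generalization controls described in this section is shown below; the metadata fields, log sink, and messages are assumptions for illustration only.

    // Illustrative only: logging of LLM transmissions and generalized user-facing errors.
    // Field names, log sink, and messages are assumptions (Section 10).

    interface LlmTransmissionLog {
      timestamp: string;
      userId: string;
      feature: string;        // e.g., "templateGeneration"
      provider: string;       // e.g., which LLM Provider handled the request
      contentBytes: number;   // size metadata only; the content itself is not logged
    }

    function logTransmission(entry: LlmTransmissionLog): void {
      console.log(JSON.stringify(entry));   // stands in for a real audit log sink
    }

    function toUserFacingError(internalError: Error): string {
      // The detailed error stays in internal logs; the User sees a generalized message
      // that does not echo Customer Content back.
      console.error("AI feature error:", internalError.name);
      return "The AI request could not be completed. Please retry or contact support.";
    }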
11. Service Availability
We aim to keep the Application running smoothly at all times. The AI feature of the Application uses third-party service providers (LLM Providers). If your usage of these services creates unusually high costs, we may need to pause or slow down your requests until usage returns to normal (for example, when limits reset) or we agree on extra capacity together. If this happens, we'll do our best to let you know quickly and work with you on options to continue your service.
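Purely as an illustration of this kind of usage safeguard, the hedged sketch below checks a per-Customer usage cap before forwarding a request; the limit, window, and reset behavior are assumptions, not contractual terms.

    // Illustrative usage safeguard (Section 11). The limit and window are assumptions.

    interface UsageWindow {
      windowStart: Date;
      costUnitsUsed: number;
    }

    const WINDOW_MS = 24 * 60 * 60 * 1000;  // hypothetical daily window
    const COST_LIMIT = 1000;                // hypothetical cost units per window

    function canProcessRequest(usage: UsageWindow, now: Date, requestCost: number): boolean {
      if (now.getTime() - usage.windowStart.getTime() > WINDOW_MS) {
        usage.windowStart = now;            // limits reset when a new window starts
        usage.costUnitsUsed = 0;
      }
      // Requests beyond the limit are paused until the window resets
      // or extra capacity is agreed with the Customer.
      return usage.costUnitsUsed + requestCost <= COST_LIMIT;
    }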
12. User and Customer Responsibilities
- Users must comply with their organization’s internal data classification, privacy, and information security policies when submitting content to AI features.
- Users must not:
a. Upload prohibited data types as described in Section 5;
b. Attempt to bypass technical safeguards or security controls;
c. Use the AI feature to generate or process content that violates applicable laws or internal policies.
13. Transparency, Limitations, and User Guidance
- The Application will provide guidance and examples of appropriate and inappropriate data for AI processing.
- AI outputs may be incomplete, incorrect, or misleading and must always be reviewed and validated by Users before being used in production templates, customer-facing documents, or business decisions.
- The AI feature is intended as an assistive tool and does not replace professional judgment, human review, or domain expertise.
14. Incident Response
- Any suspected or actual security incident or data breach involving the AI feature will be handled under Ascendix's incident response plan, which may include:
- Investigation and containment;
- Notification of affected Customers where required;
- Cooperation with Customers in fulfilling any legally required notifications to data subjects or authorities.
- Users must promptly report any suspected misuse, unexpected AI behavior, or potential data exposure via the Bug reporting feature in the Application or to Ascendix Concierge (concierge@ascendix.com), in accordance with agreed support procedures.
15. Changes to this Policy
Ascendix may update this Policy from time to time, for example, to reflect:
- Changes in the AI functionality;
- Adoption of new LLM Providers;
- Updates in applicable law or industry standards.
16. Best practices for higher accuracy
These recommendations improve the likelihood of accurate results, but do not guarantee perfect output. Users remain responsible for reviewing and validating generated content before using it in business workflows.
- PDF quality
- Prefer digital PDFs directly from systems, not photos or multi-scan copies.
- Avoid heavily compressed, blurry, or skewed scans.
- Avoid overly complicated designs and layouts.
- Layout & structure
- Use clear headings, sections, and labels.
- Avoid excessive overlapping elements, merged cells, or random text boxes.
- Tables should be reasonably structured: one row per item, clear column headers.
- Language consistency
- Use one primary language per document.
- Avoid mixing too many languages or scripts in a single section.
- Content density
- Avoid putting too much tiny text on a single page.
- Separate different parts into different pages/sections when possible.
17. Disclaimer
This Policy is provided for transparency and product documentation purposes and does not constitute legal advice. Customers are responsible for obtaining their own legal review to ensure that the use of AI features in Ascendix Composer aligns with their regulatory and contractual obligations.