EU AI Act — Technical Documentation
March 2026 | Classification: Limited-Risk AI System
1. System Overview
Provider: Nexus Automate AI, S.L. (NIF: B26956565), Spain
System name: Nexus Automate AI Platform
Version: NX-3 (ML ensemble)
Intended purpose: AI-driven e-commerce optimization software that provides behavioral analytics, personalized customer engagement, automated support, and predictive decision-making for online retailers.
2. Risk Classification
Under Regulation (EU) 2024/1689 (EU AI Act), the Nexus Automate platform is classified as a limited-risk AI system (Article 50 — transparency obligations).
Rationale for classification:
- The system does not fall under Annex III (high-risk) categories. It does not operate in the domains of biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, or administration of justice.
- The system does interact with natural persons (e-commerce customers) through AI-generated content including chat responses, product recommendations, and engagement messages, triggering transparency obligations under Article 50.
- The system uses automated profiling (behavioral analysis and customer segmentation), which is addressed through GDPR Article 22 compliance rather than through the AI Act's high-risk provisions.
Not classified as high-risk because:
- No biometric processing or identification
- No decisions affecting legal rights, access to essential services, or employment
- No use in law enforcement, migration, or judicial contexts
- E-commerce recommendations and engagement do not constitute decisions with significant effects on natural persons within the meaning of Annex III
3. Transparency Measures (Article 50 Compliance)
The following transparency measures are implemented:
3.1 AI Interaction Disclosure
- All AI-generated chat messages are clearly labeled as AI-generated (e.g., “AI Assistant” branding, disclosure banners).
- Customers are informed when they are interacting with an AI system, not a human.
- Option to request human handoff is available at any point during AI chat interactions.
3.2 AI Content Marking
- AI-generated product recommendations are identified as personalized suggestions.
- AI-generated engagement messages (cart recovery, re-engagement) are labeled as automated communications.
- No AI-generated content is presented as human-authored.
3.3 Explainability
- Customers can request a human-readable explanation of any AI decision made about them (e.g., why a particular recommendation was shown, why a particular engagement was triggered).
- Platform administrators have access to decision audit trails showing the factors that contributed to each AI action.
- The NX-3 ensemble architecture maintains interpretable scoring layers that can be queried for decision rationale.
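As an illustration of how an interpretable scoring layer could surface a decision rationale, the sketch below renders the highest-weighted factors as plain language. The factor names and weights are hypothetical examples; the actual NX-3 feature set and scoring internals are not described in this document.

```python
def explain_decision(factors: dict[str, float], top_n: int = 3) -> str:
    """Render the highest-weighted scoring factors as a plain-language rationale."""
    # Rank factors by the magnitude of their contribution, largest first.
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name} (contribution: {weight:+.2f})" for name, weight in ranked[:top_n]]
    return "This recommendation was driven by:\n" + "\n".join(lines)

# Hypothetical factor contributions for a single product recommendation.
rationale = explain_decision({
    "viewed_similar_products": 0.42,
    "category_affinity": 0.31,
    "recent_cart_activity": 0.18,
    "seasonal_trend": 0.05,
})
```

A response of this shape could back both the customer-facing explanation and the administrator audit view, since both draw on the same factor contributions.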
4. Technical Architecture Summary
4.1 AI Components
| Component | Technology | Purpose |
| --- | --- | --- |
| NX-3 ML Ensemble | Custom neural networks | Behavioral scoring, churn prediction, recommendations |
| RAG Pipeline | Vector search + graph traversal + semantic caching | Knowledge retrieval for AI support |
| LLM Integration | Google Vertex AI (Gemini), Groq (fallback) | Natural language generation for chat and engagement |
| Batch Processing | Moonshot AI (Kimi) | Catalog processing and synthetic content (no personal data) |
| Strategy Engine | Thompson Sampling, reinforcement learning | Self-learning strategy optimization |
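To make the Strategy Engine row concrete: Thompson Sampling maintains a posterior over each strategy's success rate, samples from every posterior, and acts on the strategy with the highest draw. The minimal Beta-Bernoulli sketch below illustrates the technique only; the strategy names are invented, and the production engine's reward model and strategy set are internal to the platform.

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson Sampling over candidate engagement strategies."""

    def __init__(self, strategies):
        # One Beta(1, 1) prior (uniform) per strategy: [alpha, beta].
        self.params = {s: [1, 1] for s in strategies}

    def select(self) -> str:
        # Sample a plausible success rate from each posterior; act on the best draw.
        draws = {s: random.betavariate(a, b) for s, (a, b) in self.params.items()}
        return max(draws, key=draws.get)

    def update(self, strategy: str, success: bool) -> None:
        # Observed success increments alpha; failure increments beta.
        self.params[strategy][0 if success else 1] += 1

# Illustrative strategy identifiers, not the platform's actual strategy set.
sampler = ThompsonSampler(["discount_nudge", "reminder_email", "no_action"])
chosen = sampler.select()
sampler.update(chosen, success=True)
```

Sampling from the posterior (rather than always exploiting the current best estimate) is what lets the engine keep exploring under-tried strategies while converging on the ones that perform.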
4.2 Data Flow Safeguards
- PII Stripping: Automatic detection and removal of personal identifiers before any data is sent to external AI providers.
- Pseudonymization: Customer identifiers are replaced with pseudonymous tokens in behavioral data and audit logs.
- Consent Gating: No AI processing occurs without verified consent status. The platform enforces granular consent categories (analytics, profiling, AI engagement, marketing).
- Human-in-the-Loop: Strategy engine recommendations require administrator approval before deployment. Automated decisions can be overridden at any time.
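The first two safeguards above can be sketched as follows. This is a deliberately minimal illustration: the regex patterns cover only e-mail addresses and phone numbers (a production detector would be far broader), and the HMAC key name is a placeholder for whatever key-management scheme the platform actually uses.

```python
import hashlib
import hmac
import re

# Illustrative PII patterns only; real detection would cover many more identifier types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def strip_pii(text: str) -> str:
    """Replace common personal identifiers before text leaves the platform."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def pseudonymize(customer_id: str, secret: bytes) -> str:
    """Deterministic keyed token: the same customer always maps to the same
    token, but the mapping cannot be reversed without the secret."""
    return hmac.new(secret, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

clean = strip_pii("Order query from jane.doe@example.com, call +34 612 345 678")
token = pseudonymize("cust-1029", secret=b"example-key-not-for-production")
```

A keyed (HMAC) token rather than a plain hash matters here: without the key, the token cannot be brute-forced back to a customer identifier, yet behavioral events and audit-log entries for the same customer still correlate.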
5. Human Oversight
In accordance with the EU AI Act’s emphasis on human oversight:
- Administrator dashboard: Full visibility into all AI decisions, strategy selections, and engagement outcomes.
- Approval gates: New AI-learned strategies require explicit human approval before activation.
- Kill switches: Any AI component can be disabled independently without affecting other platform functionality.
- Override capability: Administrators can override any individual AI decision or bulk-disable specific decision categories.
- Audit trails: Complete logging of all AI decisions with timestamps, input data, confidence scores, and outcomes.
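The approval-gate and kill-switch mechanics above can be expressed as a single per-component check, sketched below. Component and strategy identifiers are hypothetical; the point is the invariant that no AI component acts unless it is enabled and its active strategy has explicit human approval.

```python
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    """Illustrative oversight gate: a component may act only if it is
    enabled (kill switch) and its strategy carries human approval."""
    enabled: dict = field(default_factory=dict)   # component name -> bool
    approved: set = field(default_factory=set)    # approved strategy ids

    def may_act(self, component: str, strategy_id: str) -> bool:
        return self.enabled.get(component, False) and strategy_id in self.approved

    def kill(self, component: str) -> None:
        # Disabling one component leaves all others untouched.
        self.enabled[component] = False

gate = OversightGate(enabled={"strategy_engine": True, "chat": True})
gate.approved.add("strat-007")                      # administrator approval step
can_run = gate.may_act("strategy_engine", "strat-007")
gate.kill("strategy_engine")                        # independent kill switch
```

Because the gate is evaluated per component, tripping the kill switch on the strategy engine leaves chat (and every other component) fully operational, matching the independence requirement above.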
6. Monitoring and Updates
- Performance monitoring: Prometheus and Grafana dashboards track model performance, accuracy, and drift indicators.
- Bias monitoring: Regular analysis of AI decision patterns across customer segments to detect and correct potential biases.
- Documentation updates: This document is reviewed and updated with each significant platform release or change in AI components.
- Regulatory tracking: Nexus Automate monitors EU AI Act implementing acts and guidance from the European AI Office to ensure ongoing compliance.
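One simple form the bias analysis above could take is a rate-parity check across customer segments, sketched below. The segment labels and the four-fifths-style 0.8 threshold are illustrative assumptions, not the platform's documented methodology.

```python
def segment_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-decision rate per customer segment (labels illustrative)."""
    counts: dict[str, int] = {}
    positives: dict[str, int] = {}
    for segment, engaged in decisions:
        counts[segment] = counts.get(segment, 0) + 1
        positives[segment] = positives.get(segment, 0) + int(engaged)
    return {s: positives[s] / counts[s] for s in counts}

def disparity_flag(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag for review if the lowest segment rate falls below `threshold`
    times the highest (a four-fifths-style parity heuristic)."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and lo / hi < threshold

# Toy decision log: (segment, AI chose to engage).
rates = segment_rates([
    ("new", True), ("new", False),
    ("returning", True), ("returning", True),
])
flagged = disparity_flag(rates)
```

A flag here would not by itself establish bias; it would route the affected decision category to the administrator review described in Section 5.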
Contact for AI Act inquiries: anshika@nexus-automate.com