Scope
What we do: Hands‑on advisory and execution for AI adoption and governance—policy, approval gates, logging, evaluations, and ROI evidence once controls are live.
What we don’t do: We don’t host production systems, store client credentials, or train models on client data by default. If a project needs sensitive data, we execute a written data‑use plan and NDA/BAA first.
Data Handling & Privacy Practices
- Collection: Business‑contact details, scheduling info, and project artifacts only. Sensitive information is removed from artifacts.
- Storage: Encrypted cloud storage (Google Workspace or Microsoft 365) with MFA and least‑privilege access.
- Use of AI tools: No client‑identifiable data is sent to public AI APIs without written consent. We use private endpoints and redaction by default.
- Prompt/output logs: By default, we redact PII/secrets and retain evaluation logs for 90 days unless otherwise agreed.
- Deletion: At project close, we remove raw notes and working files unless the contract specifies retention for audit evidence.
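To illustrate what “redact PII/secrets” can mean in practice, here is a minimal sketch of pattern-based redaction applied to text before it is logged or sent to an AI endpoint. The patterns and placeholder labels are illustrative assumptions, not the actual redaction configuration; production redaction typically uses a vetted tool with broader coverage.

```python
import re

# Illustrative patterns only (assumptions, not the firm's real config).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII/secret patterns with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A call like `redact("reach jane@example.com or 555-123-4567")` returns the text with both values replaced by labeled placeholders, so evaluation logs retain structure without retaining the underlying PII.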
Security Hygiene
- All accounts protected by MFA; laptops use full‑disk encryption and automatic screen lock.
- Passwords stored in a business‑grade password manager; secrets are never shared over email or chat.
- Patches applied within 30 days; high‑severity updates prioritized sooner.
- Deliverables shared via permissioned links; downloads time‑limited where possible.
Ethical Governance & Standards
Our methods map to public frameworks. We keep humans in the loop for high‑impact decisions. Every pilot has clear success criteria, measured value, and exportable evidence (factsheets, approvals, evaluation results). If the data doesn’t support safety or value, we pause, adjust, or stop.
Framework Alignment: “Alignment” means our methods map to public frameworks; it does not imply certification or endorsement by NIST, ISO, the European Commission, or AICPA.
Evidence available on request:
- Sample AI use policy, approval‑gate checklist, and redaction settings.
- Example evaluation report (quality, safety/bias where applicable, latency, cost) with pass/fail thresholds.
- Control‑mapping sheet to NIST AI RMF and ISO/IEC 42001, with EU AI Act obligations noted where relevant.
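The pass/fail thresholds mentioned above can be sketched as a simple evaluation gate: each metric is compared against a threshold, and the pilot passes only if every metric clears. The metric names, threshold values, and directions below are illustrative assumptions, not actual client criteria.

```python
# Illustrative thresholds (assumptions, not real client criteria).
# "min" metrics must meet or exceed the threshold; "max" must not exceed it.
THRESHOLDS = {
    "quality":    (0.85, "min"),   # e.g. task-accuracy score
    "safety":     (0.95, "min"),   # e.g. share of outputs passing safety checks
    "latency_ms": (2000, "max"),   # e.g. p95 response time
}

def gate(results: dict) -> dict:
    """Return per-metric pass/fail plus an overall verdict."""
    verdict = {}
    for metric, (threshold, direction) in THRESHOLDS.items():
        value = results[metric]
        verdict[metric] = (value >= threshold if direction == "min"
                           else value <= threshold)
    verdict["pass"] = all(verdict.values())
    return verdict
```

Because every metric and its threshold is explicit, the returned verdict can be exported directly as approval-gate evidence alongside the raw evaluation results.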
Contact & Responsible Disclosure
Questions about privacy, security, or governance? Email security@onrampgrc.com or use the contact form.
For security researchers: If you believe you’ve found a vulnerability, please send details to the same address with “VULN” in the subject. We acknowledge receipt within 5 business days and coordinate a fix and responsible disclosure.