Trust & Transparency

We help organizations adopt AI safely, govern it responsibly, and prove ROI with evidence.

Scope

What we do: We advise on AI adoption and AI governance and guide hands-on execution: training, policy design, pilot planning, approval gates, logging, evaluations, and ROI tracking.

What we don't do: We do not host production systems, store client credentials, or train models on client data by default. If a project requires handling sensitive data, we execute a written data-use plan and NDA/BAA first.

Data Handling & Privacy Practices

  • Collection: We keep only business-contact details, scheduling information, and the project artifacts needed to do the work; sensitive information is removed from those artifacts.
  • Storage: Client materials are kept in encrypted cloud storage (e.g., Google Workspace or Microsoft 365) with MFA and least-privilege access.
  • Use of AI tools: We do not send client-identifiable data to public AI APIs without written consent. We favor private endpoints and redaction by default.
  • Prompt/output logs: By default, we redact PII/secrets and retain evaluation logs for 90 days unless a different retention is agreed upon.
  • Deletion: Upon project close, we remove raw notes and working files unless the contract specifies retention (e.g., for audit evidence).
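The redaction-by-default practice above amounts to a filter that runs before any prompt or log leaves a controlled environment. The sketch below is illustrative only, assuming simple regex patterns and a hypothetical `redact` helper; production deployments would use vetted PII/secret scanners, but the shape of the step is the same:

```python
import re

# Illustrative patterns for common PII/secret shapes (assumed, not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII/secrets with typed placeholders before the text
    is logged or sent to any external AI endpoint."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A redacted prompt such as `redact("contact jane@acme.com")` then reads `contact [EMAIL REDACTED]`, so evaluation logs retained for the 90-day window never contain the original identifiers.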

Security Hygiene

  • All accounts are protected by MFA. Laptops use full-disk encryption and automatic screen lock.
  • Passwords are stored in a business-grade password manager and secrets are never shared over email or chat.
  • Patches are applied within 30 days with high-severity updates prioritized sooner.
  • Client deliverables are shared via permissioned links, with time-limited download access where possible.

Ethical Governance & Standards

We align our methods to well-known frameworks and keep humans in the loop for high-impact decisions. Every pilot has clear success criteria, measured ROI, and evidence you can export (factsheets, approval records, evaluation results). When the data doesn't support value or safety, we pause, adjust, or stop.

Framework Alignment: "Alignment" indicates our methods map to public frameworks; it does not imply certification or endorsement by NIST, ISO, the European Commission, or AICPA.

The following evidence is available on request:

  • A sample AI Use Policy, approval-gate checklist, and redaction settings.
  • Example evaluation report (accuracy, safety, latency, cost) with pass/fail thresholds.
  • Control-mapping sheet showing how deliverables align to NIST AI RMF, ISO/IEC 23894, ISO/IEC 42001/42005, and EU AI Act obligations.

Contact & Responsible Disclosure

Questions about privacy, security, or governance? Email security@onrampgrc.com or use the contact form on our site.

For security researchers: if you believe you've found a vulnerability, please send details to the same address with "VULN" in the subject. We acknowledge receipt within 5 business days and coordinate a responsible fix.