AI Ethics and Safety

Sixth City AI: Built on Trust, Not Just Tech

At Sixth City AI, we don’t treat “responsible by default” as a tagline. It’s the foundation of how we work—guiding our advice to clients, shaping how we build tools, and anchoring every decision we make about future models.

We’re grounded in the practical mindset that comes with being built in Cleveland. That means focusing less on flashy promises and more on doing the right thing—every time. Over the years, we’ve taken the best lessons from industry leaders and shaped them into six working principles that guide our team:

Follow the Law
Every AI solution we deliver respects data privacy, security, and the laws that govern them—from HIPAA and GDPR to the new EU AI Act. Compliance isn’t optional.

Build It Safe
Before anything goes live, we test our models and prompts for weak spots—hallucinations, jailbreaks, edge-case breakdowns. If the stakes are high, we add extra layers of protection.

Keep It Fair
We check the data and results for bias. If something seems off, we fix it. The goal is to make AI that supports people—not marginalizes them.

Be Clear With Clients
No black boxes. Every project comes with a plain-English system brief that lays out how things work, what the risks are, and where humans stay in control.

People Make the Final Call
There’s always a named decision-maker on the client side—someone who can hit pause if needed. Machines never have the last word.

Own the Responsibility
We document who did what and what happens if something goes wrong. Our founder, Edward Jacak, acts as our AI Ethics Officer and leads our response when issues arise.

We think of AI as a kind of time-credit machine—giving small teams the space to think, solve problems, and serve customers better. But none of that matters if it isn’t safe. Ethics isn’t a checkbox here. It’s the promise that we’re building tools people can trust.

Sixth City AI — AI Ethics & Safety Policy

Effective Date: 12 May 2025

The AI Ethics & Safety policy for Sixth City Technologies, LLC and its Sixth City AI and Sixth City Tech brands is detailed below.

This policy sets out how Sixth City Technologies, LLC (d/b/a Sixth City AI, Sixth City Tech) designs, builds, deploys, and maintains artificial-intelligence solutions in a manner that is technically sound, legally compliant, and respectful of the people who use—or are affected by—our work.

The policy applies to all AI-related activity conducted by Sixth City Technologies, including:

  • Advisory and implementation projects delivered by the Sixth City AI division.

  • Internal automations, research prototypes, and proofs of concept.

  • Any future first-party or co-developed models, agents, or toolchains.

Our approach is inspired by the public commitments of leading providers such as OpenAI, yet tailored to the needs of smaller enterprises that rely on clear, pragmatic guidance.

Our six principles, and what each means in practice:

  • Lawfulness – Every solution complies with applicable privacy, security, and sector-specific regulations (e.g., GDPR, HIPAA, CCPA, and the EU AI Act).

  • Robustness & Safety – Models, prompts, and workflows undergo internal red-team testing and stress scenarios before launch; high-risk use cases trigger additional safeguards.

  • Fairness & Non-Discrimination – We check training data (where accessible) and outputs for harmful bias, and we put mitigation strategies or guardrails in place.

  • Transparency to Clients – For each engagement we deliver a plain-language “System Fact Sheet” outlining model purpose, data sources, limitations, and expected oversight duties.

  • Human Oversight – A named client contact retains authority to review, accept, or override AI decisions; fully autonomous operation is delivered only with explicit sign-off.

  • Accountability – Roles, escalation paths, and audit trails are documented so that issues can be traced, understood, and corrected quickly.

Three roles share responsibility for carrying out this policy:

  • AI Ethics Officer – Edward Jacak (Founder) is the single point of responsibility for interpreting this policy, granting risk exceptions, and resolving ethics inquiries.

  • Delivery Leads – Every client project is assigned a Delivery Lead who owns day-to-day compliance with this policy and reports issues to the AI Ethics Officer.

  • Domain Reviewers – Senior engineers and subject-matter advisors perform technical and legal checks as part of the internal review process.

Every engagement moves through a seven-step delivery lifecycle:

  • Concept Brief
    Document intended purpose, user groups, success metrics, and potential risks.

  • Risk Triage
    Classify the project as Low / Medium / High based on data sensitivity, autonomy level, and potential societal impact (a triage sketch follows this list).

  • Design Controls
    Choose models, data pipelines, and guardrails suited to the risk class.

  • Internal Red-Team Testing
    Stress-test prompts and workflows for misuse, jailbreaks, and unsafe outputs.

  • Client Fact Sheet & Sign-Off
    Provide a one-page summary (under NDA) covering model limitations, monitoring obligations, and fallback mechanisms.

  • Deployment & Monitoring
    Enable logging, rate-limiting, and performance dashboards; flag anomalies to the Delivery Lead and, when material, to the AI Ethics Officer (a rate-limiting sketch also follows this list).

  • Post-Launch Review
    Collect user feedback, measure KPIs, and schedule prompt or model updates as part of ongoing fractional-manager support where applicable.
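
To make the Risk Triage step concrete, here is a minimal sketch of how a Low / Medium / High classification could be scored. The field names, weights, and thresholds are illustrative assumptions for discussion, not our production rubric.

    # Hypothetical triage scoring; field names and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProjectProfile:
        sensitive_data: bool    # PHI, financial, or other regulated data
        autonomy_level: int     # 0 = human approves every action, 2 = fully autonomous
        societal_impact: int    # 0 = internal tooling, 2 = public-facing decisions

    def triage(p: ProjectProfile) -> str:
        """Return 'Low', 'Medium', or 'High' for a project profile."""
        if p.sensitive_data and p.autonomy_level == 2:
            return "High"  # sensitive data plus full autonomy is always High
        score = int(p.sensitive_data) + p.autonomy_level + p.societal_impact
        if score >= 3:
            return "High"
        return "Medium" if score >= 1 else "Low"

    # Example: an internal assistant, human-reviewed, no sensitive data
    print(triage(ProjectProfile(False, 0, 0)))  # -> Low

A “High” result is what triggers the additional safeguards described under Robustness & Safety.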
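
Similarly, for the Deployment & Monitoring step, the sketch below shows one common guardrail pattern: a token-bucket rate limiter that logs rejected calls so they surface on a dashboard. The rate and burst values are assumptions, not defaults we ship.

    # Minimal token-bucket rate limiter with logging; parameters are illustrative.
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("guardrail")

    class TokenBucket:
        """Allow roughly `rate` requests per second, with bursts up to `capacity`."""
        def __init__(self, rate: float, capacity: int):
            self.rate, self.capacity = rate, capacity
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            log.warning("Rate limit hit; call rejected")  # surfaces on the dashboard
            return False

    bucket = TokenBucket(rate=5.0, capacity=10)  # ~5 model calls/sec, burst of 10
    if bucket.allow():
        log.info("Forwarding request to the model")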

While we do not pursue formal certification today, our controls mirror the spirit and major provisions of:

  • GDPR (EU/UK) and CCPA/CPRA (California) for data privacy.

  • HIPAA (U.S. health data) when handling Protected Health Information.

  • SOC 2 principles for security, availability, and confidentiality.

  • The EU AI Act’s provisions regarding risk categorization, transparency, and human oversight.

Because most of our work is bespoke and commercially sensitive, we do not publish public “model cards.” Instead we supply each client with:

  • A System Fact Sheet (purpose, inputs, known limits, human-in-the-loop checkpoints).

  • A Compliance Appendix mapping relevant laws to the controls in place.

Both documents are provided under existing NDAs and may be updated during the engagement.
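
As a rough illustration of the kind of information a System Fact Sheet captures, here is a hypothetical shape for one in code; the field names and example values are our own assumptions, not a published template.

    # Hypothetical System Fact Sheet structure; field names are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class SystemFactSheet:
        purpose: str                    # what the system is for
        inputs: list[str]               # data sources the model sees
        known_limits: list[str]         # documented failure modes
        human_checkpoints: list[str]    # human-in-the-loop review points
        compliance_notes: dict[str, str] = field(default_factory=dict)  # law -> control

    sheet = SystemFactSheet(
        purpose="Draft first-pass replies to routine support tickets",
        inputs=["ticket text", "public product docs"],
        known_limits=["may state policy details incorrectly", "English only"],
        human_checkpoints=["an agent approves every outbound reply"],
        compliance_notes={"GDPR": "no personal data retained beyond the engagement"},
    )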

Incidents are handled through a five-step response process:

  • Detection – Automated alerts flag abnormal output patterns, latency spikes, or usage deviations (a detection sketch follows this list).

  • Initial Containment – The Delivery Lead pauses or rate-limits the affected component if user safety or legal exposure is suspected.

  • Notification – Material incidents are reported to the client project owner and the AI Ethics Officer within 24 hours.

  • Root-Cause Analysis – A five-whys review determines whether model drift, prompt change, data corruption, or misuse is at fault.

  • Remediation & Documentation – Fixes are implemented, tested, and logged; the incident archive is retained for seven years.
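
To illustrate the Detection step above, here is a minimal sketch of one way a latency-spike alert could work: a rolling z-score over recent response times. The window size and alert threshold are assumptions.

    # Illustrative latency-spike detector; window and threshold are assumptions.
    from collections import deque
    import statistics

    class LatencyMonitor:
        """Flag a request whose latency is far above the recent rolling average."""
        def __init__(self, window: int = 100, threshold: float = 3.0):
            self.samples = deque(maxlen=window)
            self.threshold = threshold  # z-score above which we alert

        def observe(self, latency_ms: float) -> bool:
            alert = False
            if len(self.samples) >= 10:  # need a baseline before alerting
                mean = statistics.fmean(self.samples)
                stdev = statistics.pstdev(self.samples)
                if stdev > 0 and (latency_ms - mean) / stdev > self.threshold:
                    alert = True  # escalate to the Delivery Lead
            self.samples.append(latency_ms)
            return alert

    monitor = LatencyMonitor()
    for ms in [120, 115, 130, 118, 122, 125, 119, 121, 124, 117, 900]:
        if monitor.observe(ms):
            print(f"Latency anomaly: {ms} ms")

Abnormal output patterns and usage deviations would get analogous checks feeding the same escalation path.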

All technical and client-facing staff complete annual training covering:

  • Principles of responsible AI and bias mitigation.

  • Data-handling and privacy obligations.

  • Escalation paths for ethics or safety concerns.

The AI Ethics Officer reviews this document at least once per year, or sooner if legal requirements change. Revisions are circulated internally and the “Effective Date” updated. Material changes affecting active client work trigger a notification to those clients within 15 days.

To raise a concern or seek clarification, e-mail ethics@sixthcityai.com or contact:

Edward Jacak – AI Ethics Officer
Sixth City Technologies, LLC
8954 Albion Road, North Royalton, OH 44133, USA
+1 (216) 220-9261