HANALEI.DEV PORTFOLIO / AI Governance Playbook

AI Acceptable Use Policy

A governance policy for a 500-person organization covering approved tools, data handling rules, human review standards, incident response, and procurement approval.

Document Type: Organizational Policy
Organization Size: ~500 Employees
Framework Alignment: NIST AI RMF, OECD AI
Version: 1.0

Purpose & Scope

This policy governs the use of artificial intelligence tools and systems by all employees, contractors, and vendors operating on behalf of the organization. It establishes minimum standards for responsible AI use, defines approval requirements, and protects employees, customers, and the organization from foreseeable AI-related harms.

This policy applies to all AI tools, whether procured by the organization, accessed through personal accounts for work purposes, or embedded in existing software platforms.

Core Principle

AI tools augment human judgment. No AI system may make final decisions about employment, compensation, credit, legal matters, or individual assessment without a documented human review step.

Approved Tools

Employees may only use AI tools that appear on the organizational approved tools registry. Tools are classified by use tier. Using unapproved tools for work tasks is a policy violation.

Tool | Category | Approved Use Cases | Status
Microsoft Copilot (M365) | Productivity / Writing | Drafting, summarizing internal docs, meeting notes, email assistance | Approved
GitHub Copilot | Code Assistance | Code suggestions, documentation, test generation for non-sensitive repos | Approved
Claude (Anthropic) | Research / Analysis | Research synthesis, policy drafting, non-sensitive analysis tasks | Approved
ChatGPT (OpenAI) | General LLM | Non-sensitive drafting and research; no customer data, no PII | Restricted
Grammarly | Writing Assistant | Grammar and tone assistance for external communications; no confidential content | Restricted
Unapproved LLMs | Any | Not permitted for any work purpose until the intake and approval process is completed | Prohibited

The AI Governance Lead maintains the approved tools registry. Employees may request new tool reviews through the procurement approval flow (Section 5).
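The registry lookup can also be mirrored in a machine-readable form so that internal tooling can enforce the default-deny rule. The sketch below is illustrative only: the tool names and statuses come from the table above, but the data structure and the `check_tool` helper are hypothetical, not part of this policy.

```python
# Illustrative, machine-readable mirror of the approved tools registry.
# Names and statuses reflect the policy table; the structure is hypothetical.
APPROVED_TOOLS_REGISTRY = {
    "microsoft copilot (m365)": {"status": "approved", "category": "Productivity / Writing"},
    "github copilot": {"status": "approved", "category": "Code Assistance"},
    "claude (anthropic)": {"status": "approved", "category": "Research / Analysis"},
    "chatgpt (openai)": {"status": "restricted", "category": "General LLM"},
    "grammarly": {"status": "restricted", "category": "Writing Assistant"},
}

def check_tool(name: str) -> str:
    """Return the policy status for a tool; unknown tools default to prohibited."""
    entry = APPROVED_TOOLS_REGISTRY.get(name.strip().lower())
    if entry is None:
        # Any tool not on the registry is prohibited until intake and approval complete.
        return "prohibited"
    return entry["status"]
```

The default-deny return value matters: absence from the registry is treated the same as an explicit prohibition, matching the policy's rule that unapproved tools may not be used.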

Sensitive Data Rules

The following data categories must never be entered into any AI tool, approved or otherwise, unless the tool has been specifically reviewed and approved for that data type with appropriate contractual protections in place.

Data Category | Examples | Rule
Personally Identifiable Information (PII) | Names, SSNs, DOB, addresses, employee IDs | Prohibited in all AI tools unless the tool is specifically approved for PII processing with a DPA in place
Customer Data | Account details, transaction records, contact information | Prohibited. Customer data may not be used as AI input without explicit contractual authorization
Financial Data | Revenue figures, forecasts, M&A details, unreported earnings | Prohibited in external AI tools. Internal approved tools only, with Finance approval
Legal Matter Details | Litigation, regulatory inquiries, privileged communications | Prohibited. Legal holds and attorney-client privilege considerations apply. No AI processing without Legal review
Employee Performance Data | Reviews, compensation, disciplinary records, medical information | Prohibited. HR owns this data classification. No AI processing of employee records without CHRO approval
Proprietary Source Code | Unreleased product code, security configurations | Engineering may use approved code assistants only for non-sensitive repos. Core IP requires CTO approval

When in Doubt

If you are unsure whether data is appropriate to enter into an AI tool, do not enter it. Submit a question to the AI Governance Lead before proceeding. Err on the side of caution.
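Organizations sometimes supplement this judgment call with an automated pre-submission check that flags obvious sensitive patterns before text reaches an AI tool. The sketch below is a hypothetical illustration, not a control this policy mandates; the two regex patterns are examples only and are nowhere near exhaustive, so such a filter supplements, never replaces, human review.

```python
import re

# Hypothetical pre-submission filter: flags obvious sensitive-data patterns
# (SSNs, email addresses) in text bound for an AI tool. Illustrative only;
# the pattern list is a minimal example, not a complete PII detector.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

A non-empty result means the submission should stop and be routed to the AI Governance Lead, consistent with the "when in doubt" rule above.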

Human Review Standards

AI-generated outputs must not be used as final work product in the contexts identified in the Core Principle above (employment, compensation, credit, legal matters, or individual assessment) without documented human review. "Human review" means a qualified person has read, assessed, and taken responsibility for the output before it is acted upon or distributed.

Incident Response

An AI incident is any event in which an AI tool produces output that causes or risks causing harm to a person, the organization, or a third party. This includes: harmful content, discriminatory output, data exposure, security compromise, or significant factual error acted upon in a business decision.

Step | Action | Owner | Timeline
1 | Stop use of the AI tool involved. Preserve the input and output that caused the incident. | Employee who identified the incident | Immediately
2 | Report to the direct manager and submit an incident report via the AI incident form (linked on the intranet). | Employee + Manager | Within 2 hours
3 | AI Governance Lead and CISO assess severity. Classify as Low / Medium / High / Critical. | AI Governance Lead, CISO | Within 4 hours
4 | For Medium and above: suspend the tool organization-wide pending review. Notify Legal if data exposure or regulatory risk is present. | CISO, Legal | Within 8 hours
5 | Conduct root cause analysis. Document findings and remediation steps. Update the tool registry and policy as needed. | AI Governance Lead | Within 5 business days
6 | Brief executive leadership on Critical incidents. Board notification for incidents with legal, regulatory, or reputational exposure. | CISO, General Counsel | Within 24 hours of Critical classification
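The severity-dependent actions in the table above can be expressed as a small decision function, which is useful for incident tooling or checklists. This is an illustrative sketch of the table's logic; the function name and parameters are hypothetical, and the conditional Legal notification follows step 4 ("if data exposure or regulatory risk is present").

```python
# Illustrative encoding of the incident-response table's severity logic.
# Severity tiers and actions follow the policy; the function is hypothetical.
def required_actions(severity: str, data_exposure: bool = False) -> dict:
    """Map an incident severity tier to the actions the policy requires."""
    s = severity.strip().lower()
    if s not in ("low", "medium", "high", "critical"):
        raise ValueError(f"unknown severity tier: {severity}")
    medium_or_above = s in ("medium", "high", "critical")
    return {
        "suspend_tool": medium_or_above,                    # Step 4
        "notify_legal": medium_or_above and data_exposure,  # Step 4 (conditional)
        "brief_executives": s == "critical",                # Step 6
    }
```

For example, a Medium incident with confirmed data exposure triggers both the organization-wide suspension and the Legal notification, while a Low incident triggers neither.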

Procurement Approval Flow

All AI tools must complete this approval process before organizational use. No exceptions. Business units may not use AI tools acquired through personal or departmental spend to circumvent this process.

1. Submit Request: Business unit submits the AI Tool Intake Form with use case, vendor, data types involved, and proposed users.
2. Risk Classification: AI Governance Lead assigns a risk tier within 3 business days. Low-risk tools proceed to abbreviated review.
3. Security Review: CISO team reviews vendor security posture, data handling, API access, and authentication requirements.
4. Legal Review: Legal reviews the vendor contract, data processing agreement, IP ownership, and training data provisions.
5. Approval & Registry: AI Governance Lead issues approval or denial. Approved tools are added to the registry with conditions and a review date.

Standard review timeline is 10 business days. High-risk tools may require up to 30 days. Expedited reviews require VP-level sponsorship and are not guaranteed.