HANALEI.DEV PORTFOLIO / Policy Document

AI Agent Access Control Policy

Document Type Organizational Policy
Framework Alignment NIST AI RMF, Zero Trust
Audience IT, Security, AI Teams
Version 1.0

This policy establishes access control requirements for AI agents operating within organizational systems. It defines the minimum standards for authentication, authorization, and privilege management for autonomous and semi-autonomous AI systems.

Policy Owner CISO / AI Governance Lead
Effective Date Upon Adoption
Review Cycle Annual / Post-Incident
Classification Internal Use

1.0 Policy Statement

AI agents, including autonomous software systems that act on behalf of users or the organization, must be subject to the same identity and access management (IAM) controls applied to human users and traditional software systems. All AI agent access to organizational resources shall follow the principle of least privilege, be time-bounded where feasible, and be fully auditable.

Core Principle

An AI agent is not a trusted insider by default. It is a software principal with defined permissions, an auditable identity, and revocable access. Treat AI agent credentials with the same rigor as privileged human accounts.

2.0 Scope

This policy applies to all AI agents deployed by or on behalf of the organization, including but not limited to: autonomous and semi-autonomous agents acting on internal systems; AI assistants with delegated access to email, calendars, or other user resources; internally developed agents that call internal APIs or approved data stores; and third-party vendor AI tools granted OAuth or API access to organizational resources.

This policy does not apply to read-only AI assistants with no system access, or to AI tools operating entirely outside organizational networks and data environments.

3.0 AI Agent Access Tiers

All AI agents operating within organizational systems must be classified at the time of intake using the following access tier framework. Classification determines the required controls, review frequency, and oversight requirements.

Tier 0: Read-only / No system access
  Permitted: public data, user-supplied input
  Prohibited: any internal system access
  Oversight: no formal review required

Tier 1: Low Privilege
  Permitted: internal read-only APIs, approved data lakes, user calendar with explicit consent
  Prohibited: write access to production systems
  Oversight: annual review; IT approval

Tier 2: Elevated Privilege
  Permitted: write access to non-critical systems, rate-limited API calls, delegated email with logging
  Prohibited: financial transactions, HR data, infrastructure changes without human approval
  Oversight: quarterly review; CISO + AI Lead

Tier 3: High Privilege / Restricted
  Permitted: actions only under explicit human-in-the-loop authorization for each action sequence
  Prohibited: unsupervised autonomous action in critical systems
  Oversight: continuous monitoring; board notification required
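As an illustration, the oversight rules in the tier table can be expressed as a small decision function. This is a partial sketch covering only the Tier 2 and Tier 3 rows; the action names and the `decide` helper are assumptions for illustration, not terms defined by this policy.

```python
from enum import IntEnum


class Tier(IntEnum):
    """Access tiers from Section 3.0; a higher value means higher privilege."""
    READ_ONLY = 0   # Tier 0: no internal system access
    LOW = 1         # Tier 1: internal read-only
    ELEVATED = 2    # Tier 2: scoped write access
    HIGH = 3        # Tier 3: human-in-the-loop only

# Action names are illustrative assumptions, not policy terms.
TIER2_PROHIBITED = {"financial_transaction", "hr_data_access"}
TIER2_NEEDS_HUMAN = {"infrastructure_change"}


def decide(tier: Tier, action: str) -> str:
    """Map the tier table onto a coarse decision:
    'denied', 'needs_human', or 'allowed'."""
    if tier is Tier.HIGH:
        return "needs_human"  # every Tier 3 action sequence needs sign-off
    if tier is Tier.ELEVATED:
        if action in TIER2_PROHIBITED:
            return "denied"
        if action in TIER2_NEEDS_HUMAN:
            return "needs_human"
    return "allowed"
```

In a real deployment this decision would sit in a policy enforcement point in front of the agent's tool or API layer, not inside the agent itself.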

4.0 Required Access Controls

All AI agents classified at Tier 1 or above must implement the following controls prior to production deployment. Controls must be documented in the AI agent's system record.

4.1 Identity and Authentication

Each AI agent must hold a unique, non-human identity in the organizational IAM system. Credentials may not be shared between agents, or between an agent and a human account. Agent credentials must use short-lived tokens or certificates where supported, be rotated automatically, and never be embedded in source code, configuration files, or prompts.

4.2 Authorization and Least Privilege

Agent permissions must be scoped to the minimum resources and actions required for the agent's documented function, granted at intake under the tier framework in Section 3.0, and reviewed at the cadence defined for the agent's tier. Standing administrative or wildcard scopes are prohibited.

4.3 Audit and Logging

Every agent action against an organizational resource must be logged with the agent's identity, the action taken, the resource affected, the outcome, and a timestamp. Logs must be tamper-evident, attributable to a single agent identity, and retained per the organizational retention schedule.
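A single audit record for an agent action might be structured as follows; the field names and the `audit_record` helper are illustrative assumptions, not mandated by this policy.

```python
import datetime
import json
import uuid


def audit_record(agent_id: str, tier: int, action: str,
                 resource: str, outcome: str) -> str:
    """Build one append-only audit log line (JSON) for an AI agent action."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,   # the agent's unique, non-human identity
        "tier": tier,           # access tier from Section 3.0
        "action": action,       # what the agent attempted
        "resource": resource,   # what it acted on
        "outcome": outcome,     # e.g. "allowed" or "denied"
    }
    return json.dumps(record, sort_keys=True)
```

Emitting one structured line per action keeps records machine-parseable for the continuous monitoring that Tier 3 requires.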

4.4 Session Management

AI agent sessions must time out after a defined period of inactivity: 4 hours for Tier 2 agents, and 1 hour or the duration of the current task, whichever is shorter, for Tier 3 agents.

4.5 Scope Creep Prevention

AI agents may not request additional permissions at runtime. Any permission expansion requires a new access request and approval cycle.
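A deny-by-default check for runtime permission requests might look like the following sketch; the scope strings and exception name are assumptions.

```python
class PermissionEscalationError(Exception):
    """Raised when an agent requests a scope outside its approved grant."""


def authorize(requested: set[str], granted: set[str]) -> set[str]:
    """Allow a runtime request only if every scope was approved at intake;
    anything extra is rejected and must go through a new access request."""
    extra = requested - granted
    if extra:
        raise PermissionEscalationError(
            f"runtime expansion denied for scopes: {sorted(extra)}"
        )
    return requested
```

Raising rather than silently trimming the request makes the escalation attempt visible in the audit log.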

4.6 Emergency Revocation

A defined process must exist to revoke all AI agent credentials within 15 minutes of an incident declaration, without requiring access to the agent itself.
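One way to structure the revocation sweep is against the IAM backend rather than the agent itself, which satisfies the "without requiring access to the agent" clause; `CredentialStore` here is a hypothetical interface, not a real library.

```python
from typing import Protocol


class CredentialStore(Protocol):
    """Hypothetical interface to the organization's secret/IAM backend."""
    def list_agent_credentials(self) -> list[str]: ...
    def revoke(self, credential_id: str) -> None: ...


def emergency_revoke_all(store: CredentialStore) -> int:
    """Revoke every AI agent credential at the IAM layer, never touching
    the agents themselves; returns the number of credentials revoked."""
    revoked = 0
    for cred in store.list_agent_credentials():
        store.revoke(cred)
        revoked += 1
    return revoked
```

Because the sweep runs server-side, it works even when a misbehaving agent is unresponsive or hostile.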

4.7 Third-Party AI Tools

Vendor AI tools requesting OAuth or API access must pass vendor risk assessment before credentials are issued. Access must be scoped to the minimum required by the product.
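Scope minimization for vendor OAuth grants can be sketched as an intersection against an approved-scope register; the vendor names and scope strings below are invented for illustration.

```python
# Register of scopes approved during vendor risk assessment (illustrative).
APPROVED_VENDOR_SCOPES: dict[str, set[str]] = {
    "notetaker-ai": {"calendar.read"},
    "crm-copilot": {"contacts.read", "contacts.write"},
}


def grant_scopes(vendor: str, requested: set[str]) -> set[str]:
    """Issue only the intersection of requested and approved scopes;
    a vendor absent from the register gets nothing until review passes."""
    return requested & APPROVED_VENDOR_SCOPES.get(vendor, set())
```

Over-broad requests are silently narrowed to the approved minimum instead of being granted as asked.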

5.0 Prohibited Configurations

The following configurations are prohibited for all AI agents regardless of tier, business justification, or vendor claims:

- Credentials shared between multiple agents, or between an agent and a human account
- Standing administrative, root, or wildcard access scopes
- Agents able to approve or expand their own permissions at runtime
- Audit logging that is disabled, incomplete, or modifiable by the agent
- Secrets embedded in prompts, source code, or model context
- Tier 3 actions executed without human-in-the-loop authorization

6.0 Policy Exceptions

Exceptions to this policy must be submitted to the AI Governance Lead and CISO for joint review. All approved exceptions must include: a documented risk acceptance statement, a defined remediation timeline, and compensating controls. Exceptions are valid for a maximum of 90 days and may not be renewed more than once without executive escalation.