HANALEI.DEV PORTFOLIO / Risk Framework

AI Tool Intake & Risk Assessment Template

A structured intake and risk assessment template combining NIST AI RMF with vendor risk best practices. Designed to help organizations evaluate AI tools before deployment, covering risk classification, data handling, and governance accountability.

Document Type: Assessment Template
Framework: NIST AI RMF + Vendor Risk
Use: Pre-Deployment Evaluation
Version: 1.0

How to Use This Template

This template is filled out by the business unit requesting a new AI tool; the review sections are completed by the AI Governance Lead, Security, and Legal. All fields are required unless marked optional. Incomplete submissions will be returned without review.

Once completed, submit to the AI Governance Lead via the intake portal. You will receive a tracking number within 2 business days and a final decision within 10 business days for standard reviews.

Before You Submit

If the tool involves processing of customer data, employee personal data, health information, or financial records, notify Legal before submitting. These data types require pre-coordination and may extend the review timeline.

Tool & Vendor Information

1. Basic Identification (NIST AI RMF: GOVERN 1.7)
Tool / Product Name
Include version if applicable
Vendor / Developer
Legal entity name
Vendor Website
Primary product URL
Access Method
How will the organization access the tool?
SaaS / Browser
API Integration
On-Premises
Plugin / Extension
Mobile App
Requesting Department
Business Sponsor
Name and title of approving leader
Estimated Users
Number of employees who will use this tool
Intended Use Case
Describe specifically what this tool will be used for

Data Handling Assessment

2. Data Types & Processing (NIST AI RMF: MAP 2.3, GOVERN 1.7)
Data Types Involved
Select all that apply
No personal data
Employee PII
Customer PII
Financial records
Health / medical
Proprietary IP
Legal privileged
Regulated data
Does vendor retain inputs?
Will the vendor store prompts, files, or data submitted to the tool?
No retention
Temporary only
Retained
Unknown
Model training on customer data?
Can vendor use submitted data to train or improve their models?
No: confirmed in contract
Opt-out available
Yes
Unknown
Data residency
Where will data be processed and stored?
US only
EU region available
Global / unspecified
On-premises
DPA available?
Has the vendor provided a Data Processing Agreement?
Yes: executed
Available, not yet signed
Not available
Not applicable

Vendor Security Posture

3. Security Certifications & Controls (NIST AI RMF: GOVERN 6.1, MAP 3.5)
Security certifications held
Select all confirmed with documentation
SOC 2 Type II
ISO 27001
ISO 42001
FedRAMP
HIPAA BAA
CSA STAR
None confirmed
SSO / SAML support
Can access be federated through the organization's identity provider?
Yes: SAML 2.0
Yes: OIDC / OAuth
No
Enterprise tier only
Admin controls available
Can admins provision users, set policies, and view audit logs?
Yes: full admin console
Partial controls
No admin controls
Encryption in transit
TLS 1.3
TLS 1.2
Unconfirmed
Encryption at rest
AES-256
Other (specify in notes)
Not confirmed
Breach notification process
Does vendor contract include breach notification SLA?
Yes: 72 hours or less
Yes: timeframe unspecified
No
Unknown
Known security incidents
Any publicly disclosed breaches or incidents in the past 3 years?
None identified
Yes: describe in notes
Unknown
Notes
Additional security context, links to documentation, or concerns

AI-Specific Risk Factors

4. Model & Output Risk Assessment (NIST AI RMF: MAP 5.1, MEASURE 2.1)
AI system type
Large language model
Image / video generation
Classification / prediction
Recommendation engine
Autonomous agent
Other
Autonomy level
Will AI make or influence decisions without human review?
Fully human-reviewed
Human reviews exceptions only
Autonomous with logging
Fully autonomous
Decisions affected
Will outputs influence any of the following?
Employment / hiring
Credit / financial
Customer service
Legal / compliance
Medical / health
Marketing / targeting
None of the above
Affected populations
Who will be affected by the AI system's outputs?
Internal employees only
Customers
Vendors / partners
General public
Vulnerable populations
Vendor model documentation
Has vendor provided model card, bias evaluation, or safety documentation?
Yes: reviewed and attached
Partial documentation
Not available
EU AI Act risk tier
Based on use case and affected population
Minimal
Limited
High
Unacceptable (prohibited)

Governance & Accountability

5. Ownership & Oversight (NIST AI RMF: GOVERN 1.1, MANAGE 4.1)
Technical Owner
Person responsible for integration, monitoring, and technical issues
Data Owner
Person responsible for data classification and handling decisions
Human review process
Describe how human oversight will be implemented for AI outputs
Monitoring plan
How will the tool's outputs and behavior be monitored after deployment?
Incident reporting
Who will employees contact if they encounter an AI output issue?
Training required?
Will users receive training before accessing this tool?
Yes: required before access
Optional / recommended
Not planned
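
The submitter sections above can be captured as a structured record for the intake portal. The sketch below is illustrative only: the field names, enum values, and completeness check are assumptions about how a portal might store submissions, not part of the template itself.

```python
# Hypothetical machine-readable shape for an intake submission.
# Field and enum names are illustrative, not defined by the template.
from dataclasses import dataclass, field
from enum import Enum

class AccessMethod(Enum):
    SAAS = "SaaS / Browser"
    API = "API Integration"
    ON_PREM = "On-Premises"
    PLUGIN = "Plugin / Extension"
    MOBILE = "Mobile App"

class VendorRetention(Enum):
    NONE = "No retention"
    TEMPORARY = "Temporary only"
    RETAINED = "Retained"
    UNKNOWN = "Unknown"

@dataclass
class IntakeSubmission:
    tool_name: str
    vendor: str
    access_method: AccessMethod
    requesting_department: str
    business_sponsor: str
    estimated_users: int
    intended_use_case: str
    data_types: list = field(default_factory=list)
    vendor_retention: VendorRetention = VendorRetention.UNKNOWN

    def ready_for_review(self) -> bool:
        # Incomplete submissions are returned without review.
        return bool(self.tool_name and self.vendor and self.intended_use_case)

sub = IntakeSubmission(
    tool_name="ExampleAI 2.1",
    vendor="Example Labs, Inc.",
    access_method=AccessMethod.SAAS,
    requesting_department="Marketing",
    business_sponsor="J. Doe, VP Marketing",
    estimated_users=40,
    intended_use_case="Draft first-pass ad copy for human review",
    data_types=["No personal data"],
)
print(sub.ready_for_review())  # prints "True"
```

A structured record like this also makes the reviewer scoring step straightforward to automate against the matrix in the next section.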

Risk Scoring & Decision Framework

This section is completed by the AI Governance Lead and reviewing teams. It is not completed by the submitter.

Reviewer Scoring Matrix
Risk Dimension | Weight | Score 1 (Low) | Score 2 (Medium) | Score 3 (High) | Score | Weighted
Data Sensitivity: type and volume of data processed | 3x | No personal data | Internal PII only | Customer or regulated data | ___ | ___
Autonomy Level: degree of human oversight over outputs | 3x | All outputs reviewed | Exceptions reviewed | Autonomous decisions | ___ | ___
Impact Scope: population affected by outputs | 2x | Internal only | Customers / partners | Public or vulnerable groups | ___ | ___
Vendor Security: certifications and security posture | 2x | SOC 2 + SSO confirmed | Partial documentation | No certifications confirmed | ___ | ___
Data Retention: vendor retention and training use | 2x | No retention, DPA signed | Retention with controls | Retention unknown or training use | ___ | ___
Regulatory Exposure: compliance obligations implicated | 2x | None identified | Limited obligations | GDPR / EU AI Act / HIPAA, etc. | ___ | ___
Total Weighted Score (max 42): ___ / 42
6–14 (Low Risk): Expedited review track. AI Lead approval. Standard controls apply.
15–24 (Medium Risk): Full committee review. Security and Legal sign-off required.
25–34 (High Risk): Extended review with additional conditions. C-suite notification.
35–42 (Critical Risk): Executive approval required. May be denied or require fundamental redesign.
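
The scoring arithmetic above is a straightforward weighted sum: each of the six dimensions is scored 1–3, multiplied by its weight, and the totals are mapped to the tier bands. A minimal sketch, assuming illustrative dictionary keys (the dimension names are from the matrix; the function names are not part of the template):

```python
# Weights from the reviewer scoring matrix (six dimensions, scored 1-3).
WEIGHTS = {
    "data_sensitivity": 3,
    "autonomy_level": 3,
    "impact_scope": 2,
    "vendor_security": 2,
    "data_retention": 2,
    "regulatory_exposure": 2,
}

# Tier bands: (inclusive upper bound, tier label).
TIERS = [(14, "Low Risk"), (24, "Medium Risk"),
         (34, "High Risk"), (42, "Critical Risk")]

def weighted_total(scores):
    """Sum weight * score across all six dimensions."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("all six dimensions must be scored")
    if any(s not in (1, 2, 3) for s in scores.values()):
        raise ValueError("each score must be 1, 2, or 3")
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def risk_tier(total):
    """Map a weighted total to its tier band."""
    for upper, tier in TIERS:
        if total <= upper:
            return tier
    raise ValueError("score exceeds maximum of 42")

# Example: a tool handling customer data with partial human review.
example = {
    "data_sensitivity": 3,    # customer or regulated data
    "autonomy_level": 2,      # exceptions reviewed
    "impact_scope": 2,        # customers / partners
    "vendor_security": 1,     # SOC 2 + SSO confirmed
    "data_retention": 2,      # retention with controls
    "regulatory_exposure": 3, # GDPR / EU AI Act
}
total = weighted_total(example)
print(total, risk_tier(total))  # prints "31 High Risk"
```

Note that with these weights the minimum attainable total is 14 (all dimensions scored 1), so in practice only a score of exactly 14 lands in the low-risk band.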
Outcome | Conditions | Next Steps
Approved | Score below 25, all review criteria met, no unresolved flags | Add to registry. Issue approval letter. Configure SSO and access controls. Set review date.
Approved with Conditions | Score below 35 with documented risks requiring compensating controls or ongoing monitoring | Issue conditional approval with specific requirements. Schedule 90-day check-in. Conditions must be met before full production use.
More Information Needed | Incomplete submission, unresolved security findings, or DPA not yet executed | Return to submitter with specific requirements. Clock pauses until resubmission. Resubmission deadline: 30 days.
Denied | Use case prohibited by policy, score above 35 without mitigation path, or unacceptable data handling practices | Issue denial letter with rationale. Tool may not be used for work purposes. Resubmission permitted after 90 days if fundamental issues are resolved.