feat(agents): align RAI planner with guide, remove scoring, improve UX #1287
WilliamBerryiii wants to merge 7 commits into main from
Conversation
- restructure Phase 2 into binary trigger assessment with T1/T2/T3 tiers
- replace likelihood-impact scoring with restricted-use gate framework
- adopt AI STRIDE extensions with eight AI element types in Phase 4
- unify threat IDs to T-RAI-{NNN} format across all phases
- add rai-sensitive-uses-triggers.instructions.md for Phase 2 depth
- update collection manifests, plugins, and documentation accuracy
- add Sign-RaiArtifacts.ps1 signing script with Pester tests
Closes #1281
🚀 - Generated by Copilot
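As a rough illustration of the unified threat ID convention mentioned above, the `T-RAI-{NNN}` format can be checked with a simple pattern (this helper is hypothetical, not part of the PR):

```python
import re

# Hypothetical checker for the unified threat ID convention:
# "T-RAI-" followed by exactly three digits, e.g. T-RAI-001.
THREAT_ID = re.compile(r"T-RAI-\d{3}")

def is_valid_threat_id(candidate: str) -> bool:
    """Return True if the string matches the T-RAI-{NNN} format exactly."""
    return bool(THREAT_ID.fullmatch(candidate))

print(is_valid_threat_id("T-RAI-042"))   # True
print(is_valid_threat_id("T-RAI-7"))     # False: not zero-padded to three digits
print(is_valid_threat_id("T-SEC-001"))   # False: wrong prefix
```

A check like this is what lets later phases cross-reference threats by ID without ambiguity.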
Dependency Review
✅ No vulnerabilities or license issues or OpenSSF Scorecard issues found.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #1287 +/- ##
==========================================
- Coverage 87.72% 86.94% -0.79%
==========================================
Files 61 63 +2
Lines 9320 9567 +247
==========================================
+ Hits 8176 8318 +142
- Misses 1144 1249 +105
Flags with carried forward coverage won't be shown.
Phase 5 artifact templates (control-surface-catalog.md, evidence-register.md, rai-tradeoffs.md) have YAML frontmatter but no disclaimer preamble. The plan template correctly includes one. These files persist to disk and may be shared standalone.
Suggested change — Add after frontmatter in all three templates:
Generated by RAI Planner AI assistant. All content is suggestive and requires
validation by qualified professionals before implementation. This is not legal,
compliance, or ethics advice.
Thank you for flagging the gap in Phase 5 artifact templates. We addressed this by adding an AI-content note to all three templates (Control Surface Catalog, Evidence Register, and RAI Tradeoffs).
We classified Control Surface Catalog and Evidence Register as agentic artifacts — they're consumed by later pipeline phases rather than read standalone by stakeholders — so they receive only the AI-content note. RAI Tradeoffs is human-facing and includes both the note and a human review checkbox.
The footer classification is documented in the new Artifact Attribution and Review section of the handoff pipeline docs.
The HTML template for ADO work items and the markdown template for GitHub issues contain structured fields (Context, RAI Principle, Threat, Control Surface, Acceptance Criteria) but no indication that the content was AI-generated. Once these work items land in ADO or GitHub, they become standalone artifacts completely disconnected from the RAI session.
Developers, PMs, or compliance reviewers encountering these items in their backlog would not know:
- The content was generated by an AI tool
- The priorities, acceptance criteria, and remediation horizons are suggestions requiring validation
- The items should not be treated as authoritative compliance directives
While the templates use "Suggested" prefixes on priority and horizon fields (good), the body text describing controls and acceptance criteria reads as authoritative statements.
Current Code (ADO HTML template):
Suggested Resolution — Add a footer notice to both templates:
ADO (HTML) — append before closing :
GitHub (Markdown) — append at the end of the issue body template:
Good catch on the standalone work item gap. Both the ADO (HTML) and GitHub (Markdown) work item templates now include the AI-content note and a human review checkbox. The ADO template uses HTML formatting to match the surrounding template structure; the GitHub template uses blockquote markdown.
These are classified as human-facing artifacts since they land in backlogs where developers and compliance reviewers encounter them independently of the RAI session.
Handoff Summary Format section
The handoff summary contains work item counts, priority breakdowns, remediation horizons, cross-references, and a "Suggested Review Status" field. A stakeholder receiving this document without the session context could interpret the structured tables and status designations as authoritative assessments rather than AI-generated suggestions.
Suggested Resolution — Insert a disclaimer blockquote between the header metadata and the work item summary:
RAI Backlog Handoff Summary
System: {system-name}
Date: {YYYY-MM-DD}
Suggested Review Status: {Ready for stakeholder review / Additional attention suggested / Significant areas need further consideration}
> This handoff was prepared by an AI assistant to support responsible AI planning.
> All items are suggestions for human evaluation and do not constitute legal,
> compliance, or ethics advice. Organizational RAI policies and applicable
> regulations take precedence.
Work Item Summary
Agreed — the handoff summary is the most stakeholder-facing deliverable in the pipeline. We gave it the full three-tier treatment: AI-content note, human review checkbox, and the complete verbatim disclaimer. This is the strongest attribution level in the system, applied only to the Handoff Summary and Compact Handoff Summary.
We also removed the older qualifier prose from the RAI Review Summary template since the new AI-content note supersedes it.
Handoff Summary Format section
The review summary has a disclaimer but the separate Handoff Summary — the most likely stakeholder-facing deliverable — does not.
Suggested change — Insert between header metadata and Work Item Summary:
> This handoff was prepared by an AI assistant to support responsible AI planning.
> All items are suggestions for human evaluation and do not constitute legal,
> compliance, or ethics advice. Organizational RAI policies and applicable
> regulations take precedence.
The compact handoff summary template in rai-identity.instructions.md already had the disclaimer applied from earlier work on this branch. We verified it includes the full three-tier treatment (AI-content note, human review checkbox, and complete disclaimer), consistent with the handoff summary in rai-backlog-handoff.instructions.md. No additional changes were needed here.
Disclaimer and Attribution Protocol section
For long-running sessions, consider adding a brief reminder at hard gate checkpoints (Phases 2, 3, and 6): "Reminder: All findings are suggestions for qualified professional review." Current coverage is solid; this would reinforce it during extended interactions.
This is baked into all the known exit points from the system, but I'll add it at the gate checks.
Actually, I may make this a mandatory step in all multi-phase/protocol-based workflows in the repo regardless, and validate its presence in CI.
I'm actually gonna skip the gate checks. We present at entrance and exit, which is more than sufficient given that we rely on / delegate down to the hosting platform (VS Code/GHCP). There is only a single user driving the session, because GHCP requires per-user authentication, and over-reminding them of the disclaimer has typically been deemed excessive in most instances (e.g. EULAs, etc.) ... typically, products and product experiences require this only on first agreement (with a specific user) and again on term changes of the underlying agreement.
Per the discussion in this thread, we're not adding gate checkpoint reminders. The existing disclaimer coverage at session start, exit points, and session resumption provides sufficient reinforcement. As noted above, over-reminding within a single authenticated user session tends to be excessive — products and experiences typically require agreement at first use and again on term changes rather than at every internal checkpoint.
raymond-nassar
left a comment
First impressions on this draft PR:
The three-part disclaimer formulation exceeds the baseline requirement: (1) covers legal, compliance, and ethics; (2) frames all outputs as suggestions; (3) establishes organizational policy precedence.
Replacing numerical scores with qualitative concern levels and maturity indicators strengthens the disclaimer posture. Qualitative assessments naturally invite human judgment.
Look forward to seeing this in action.
raymond-nassar
left a comment
Files affected: All files containing the current disclaimer blockquote:
.github/agents/rai-planning/rai-planner.agent.md
.github/instructions/rai-planning/rai-identity.instructions.md (Session Start Display, Exit Point Reminder, State Creation)
.github/prompts/rai-planning/rai-capture.prompt.md
.github/prompts/rai-planning/rai-plan-from-prd.prompt.md
.github/prompts/rai-planning/rai-plan-from-security-plan.prompt.md
Current text:
This tool provides structured prompts and frameworks to support responsible AI planning. It is not a substitute for professional legal, compliance, or ethics review. All outputs are suggestions for human evaluation. Organizational RAI policies and applicable regulations take precedence.
Required text:
This agent is an assistive tool only. It does not provide legal, regulatory, or compliance advice and does not replace Responsible AI review boards, ethics committees, legal counsel, compliance teams, or other qualified human reviewers. The output consists of suggested actions and considerations to support a user's own internal review and decision‑making. All RAI assessments, sensitive use screenings, security models, and mitigation recommendations generated by this tool must be independently reviewed and validated by appropriate legal and compliance reviewers before use. Outputs from this tool do not constitute legal approval, compliance certification, or regulatory sign‑off.
Rationale:
The current disclaimer is too general and does not adequately protect against misinterpretation. The replacement version:
- Enumerates specific professional roles the tool does not replace (RAI review boards, ethics committees, legal counsel, compliance teams)
- Names the specific output types requiring validation (RAI assessments, sensitive use screenings, security models, mitigation recommendations)
- Explicitly excludes certification and sign-off — preventing misinterpretation as approval
- States a mandatory validation requirement ("must be independently reviewed and validated") rather than a softer suggestion
- Addresses regulatory advice directly ("does not provide legal, regulatory, or compliance advice")
- This change should be applied everywhere the current disclaimer blockquote appears, including the Session Start Display and Exit Point Reminder sections in rai-identity.instructions.md which govern when the disclaimer is shown to users.
- update disclaimer in rai-planner agent, three RAI prompt files, and rai-identity instructions
- regenerate plugin outputs and reformat doc tables

⚖️ - Generated by Copilot
This has been resolved.
The disclaimer text was strengthened across all existing locations. Given that the new verbatim text explicitly states "Outputs from this tool do not constitute legal approval, compliance certification, or regulatory sign‑off", the handoff document is arguably the most critical place for this disclaimer to appear, since it's the artifact most likely to be shared outside the immediate user session with review boards, legal, or leadership. Suggested change: add between the handoff header metadata and the Work Item Summary.
Agree, and I am addressing this now, but I am still trying to figure out how to do this tastefully. My main area of consideration here is that the handoff artifacts, in their "intended use", are to be persisted locally (not saying they WILL be, but as HVE Core is currently designed, the "intended use" is user-specific local storage). Local artifacts, just like GHCP-generated code or documentation, are intended for human review. As currently designed, the agent receivers of these handoff artifacts (the backlog agents) must be operationalized by the user explicitly, the artifacts passed to them, and they are backed by 3 operational modalities:
All three operational modalities have multi-step human interaction requirements, and there are no masked workflows happening. The human is identifying and selecting a specific agent for execution, and any delegation (only available to subagents) only has the bounded context of the original agent's human request. I think at the end of the day, the thing we really care about here from a system/agent operations perspective is to ensure that a human has reviewed the artifacts that are intended for other humans to consume. If we can agree on that, then I think the most important thing about AI generation attribution is to provide, as part of the disclaimer footer, a markdown checkbox that records the artifact "has" or "has not" been reviewed by a human, with a default selection of "has not". This sends a much stronger signal of attestation of review ... which is really the behavioral outcome we are trying to drive. Consuming agents can also gate on this field ... as an added benefit.
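A minimal sketch of such a footer, in markdown (the wording and field names here are illustrative assumptions, not the exact template that was merged):

```markdown
> **AI-generated content.** This artifact was produced by an AI assistant.
> All items are suggestions requiring validation by qualified professionals;
> this is not legal, compliance, or ethics advice.
>
> Human review status:
> - [ ] Has been reviewed by a human
```

The checkbox defaults to unchecked ("has not been reviewed"), so consuming agents can treat an unchecked box as a gate.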
…system

- remove human review checkbox from agentic artifacts (Control Surface Catalog, Evidence Register) in impact assessment instructions
- remove superseded qualifier prose from RAI Review Summary template in backlog handoff instructions
- add Artifact Attribution and Review section to handoff pipeline docs with footer classification table
- add footer classification notes to Phase 5 and Phase 6 outputs in phase reference docs
- add conversational vs persisted disclaimer note to agent overview docs

📝 - Generated by Copilot
Artifact Attribution and Review — Summary of Changes

Thank you for the thorough review, @raymond-nassar. We implemented a two-tier attribution system across the RAI planning artifact templates and added documentation to support the new conventions.

Tier 1: AI-Content Note

All persisted artifacts now include an AI-content note adapted from the Microsoft Learn per-article pattern:
Tier 2: Full Disclaimer

The handoff summary — the primary stakeholder-facing deliverable — includes the complete verbatim disclaimer after the AI-content note and human review checkbox.

Human Review Checkbox

Human-facing artifacts include a review checkbox to track validation status:
Artifact Classification
Control Surface Catalog and Evidence Register are classified as agentic artifacts (consumed by later pipeline phases) and receive only the AI-content note. All other artifacts are human-facing and include the review checkbox.

Files Changed

Instruction files:
Documentation:
- add centralized config files for disclaimers and footers with human review
- add JSON schema for config file validation
- add Validate-AIArtifacts.ps1 with scope-filtered artifact classification
- add Pester tests (20 passing) for all validation functions
- add ai-artifact-validation.yml reusable workflow gated in pr-validation
- add lint:ai-artifacts and validate:ai-artifacts npm scripts

🔧 - Generated by Copilot
AI Artifact Validation Infrastructure — Update Summary

This latest commit (

What was added
Expansion plans

The current scope filter targets
The goal is for each focus area to opt in by adding its scope patterns to the config files — keeping validation centralized while allowing incremental adoption across the codebase.

🔧 - Generated by Copilot
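The scope-filtered validation described above can be sketched roughly as follows. The repo's real implementation is `Validate-AIArtifacts.ps1` in PowerShell; this Python sketch is an assumption-laden illustration (the config keys, glob pattern, and marker text are invented for the example, not taken from the actual config files):

```python
from fnmatch import fnmatch

# Hypothetical, simplified model of scope-filtered artifact validation.
# The real pipeline reads .github/config/footer-with-review.yml; the keys
# and marker text below are illustrative only.
CONFIG = {
    "scopes": [".github/instructions/rai-planning/*.md"],
    "required_marker": "Generated by RAI Planner AI assistant",
}

def in_scope(path: str, scopes: list[str]) -> bool:
    """A file is validated only if it matches one of the scope globs."""
    return any(fnmatch(path, pattern) for pattern in scopes)

def validate(path: str, content: str) -> list[str]:
    """Return a list of issues found for a single file (empty means clean)."""
    if not in_scope(path, CONFIG["scopes"]):
        return []  # out-of-scope files are skipped, avoiding false positives
    issues = []
    if CONFIG["required_marker"] not in content:
        issues.append(f"{path}: missing AI-content note")
    if not content.endswith("\n"):
        issues.append(f"{path}: missing trailing newline")
    return issues
```

Scope filtering is what lets other focus areas opt in incrementally: adding a glob to the config brings their files under validation without touching the script.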
Correction to previous comment

The scope in

Updated scope:

Updated expansion plan: When other focus areas (e.g.,

Validation results after fix: 2 files / 7 issues (down from 3 files / 9 issues), all within

🔧 - Generated by Copilot
…iling newline

Security instruction files were not modified in this PR. Remove `.github/instructions/security/**` scope patterns from `footer-with-review.yml` to avoid false-positive validation warnings. Fix missing trailing newline in `package.json`.

🔧 - Generated by Copilot
- switch Find-ArtifactReferences from content-based to filename-based matching
- update Pester tests for filename-based artifact matching logic
- fix BOM encoding and empty catch blocks for PSScriptAnalyzer
- update Docusaurus collection card counts for rai-planning
- auto-fix markdown table formatting in handoff-pipeline

🔧 - Generated by Copilot
Description
Implements Issue #1281: RAI Planner Updates — Guide Alignment, Scoring Removal, and UX Improvements.
The RAI Planner agent expands from a 5-phase to a 6-phase workflow, replacing numeric scoring with qualitative assessment, aligning terminology and structure with the Microsoft Responsible AI Impact Assessment Guide, and introducing artifact signing infrastructure. Additionally, this PR establishes a config-driven AI artifact validation pipeline for enforcing footer and disclaimer standards across instruction files. 46 files changed across agent definitions, instructions, prompts, documentation, plugins, collection metadata, config, CI workflows, and scripts.
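The artifact signing infrastructure mentioned above can be approximated like this. The PR's actual script is `Sign-RaiArtifacts.ps1` (PowerShell, with optional Sigstore cosign keyless signing); this Python sketch shows only the SHA-256 manifest idea, and the manifest schema here is an assumption, not the PR's format:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(session_dir: str) -> dict:
    """Hash every markdown artifact in a session directory into a
    SHA-256 manifest. Schema is illustrative, not the real rai-manifest.json."""
    manifest = {"algorithm": "sha256", "files": {}}
    for path in sorted(Path(session_dir).glob("*.md")):
        manifest["files"][path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def serialize(manifest: dict) -> str:
    """Deterministic serialization; a signing step (e.g. cosign, as the PR
    describes) would sign this blob rather than each file individually."""
    return json.dumps(manifest, indent=2, sort_keys=True)
```

Sorting paths and keys keeps the manifest byte-stable, so re-running the script on unchanged artifacts produces an identical blob to sign.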
Key Changes
- Basic/Standard/Comprehensive). Subsequent phases renumbered.
- Low/Moderate/High), a review quality checklist, maturity indicators, and audience adaptation profiles. Renamed `rai-scorecard.md` → `rai-review-summary.md`.
- `T-RAI-{NNN}`.
- `Sign-RaiArtifacts.ps1` for SHA-256 manifest generation with optional Sigstore cosign keyless signing, backed by a 262-line Pester test suite. Installed cosign v3.0.5 in devcontainer.
- `.github/config/disclaimers.yml`, `.github/config/footer-with-review.yml`) defining footer text, human review checkboxes, and tiered artifact classification with scope-aware glob patterns. Created `Validate-AIArtifacts.ps1` (612 lines) for CI enforcement, a JSON Schema for config validation, a reusable GitHub Actions workflow (`ai-artifact-validation.yml`), and a 20-test Pester suite. Wired into `pr-validation.yml` and the `lint:all` npm chain.

Related Issue(s)
Closes #1281
Type of Change
Select all that apply:
Code & Documentation:
Infrastructure & Configuration:
AI Artifacts:
- `prompt-builder` agent and addressed all feedback
- `.github/instructions/*.instructions.md`)
- `.github/prompts/*.prompt.md`)
- `.github/agents/*.agent.md`)
- `.github/skills/*/SKILL.md`)

> Note for AI Artifact Contributors:
>
> * Agents: Research, indexing/referencing other projects (using standard VS Code GitHub Copilot/MCP tools), planning, and general implementation agents likely already exist. Review `.github/agents/` before creating new ones.
> * Skills: Must include both bash and PowerShell scripts. See Skills.
> * Model Versions: Only contributions targeting the latest Anthropic and OpenAI models will be accepted. Older model versions (e.g., GPT-3.5, Claude 3) will be rejected.
> * See Agents Not Accepted and Model Version Requirements.
Other:
- `.ps1`, `.sh`, `.py`)

Sample Prompts (for AI Artifact Contributions)
User Request:
Invoke `RAI Planner` in the VS Code chat pane and use one of three entry prompts:

- `/rai-capture` — Start a new conversational RAI assessment from scratch
- `/rai-plan-from-prd` — Generate an RAI plan from an existing PRD
- `/rai-plan-from-security-plan` — Generate an RAI plan from an existing security plan

Execution Flow:
- `T-RAI-{NNN}` threats.
- `rai-review-summary.md`, dual-format backlog (ADO + GitHub), and optional signed artifact manifest.

Output Artifacts:
- `.copilot-tracking/rai-plans/{session}/state.json` — Session state with phase progression
- `.copilot-tracking/rai-plans/{session}/rai-review-summary.md` — Qualitative review summary (replaces scored scorecard)
- `.copilot-tracking/rai-plans/{session}/rai-backlog-*.md` — Dual-format work item backlog
- `.copilot-tracking/rai-plans/{session}/rai-manifest.json` — SHA-256 artifact manifest (optional signing)

Success Indicators:
- `rai-review-summary.md` instead of `rai-scorecard.md`
- `T-RAI-{NNN}` format consistently

Testing
- `npm run plugin:generate`) — 14 plugins
- `npm run lint:ai-artifacts`

> Note: Manual testing was performed alongside automated validation and sandbox evaluation as the primary verification methods.
GHCP Artifact Maturity
> [!WARNING]
> This PR includes experimental GHCP artifacts that may have breaking changes.
- `.github/agents/rai-planning/rai-planner.agent.md`
- `.github/prompts/rai-planning/rai-capture.prompt.md`
- `.github/prompts/rai-planning/rai-plan-from-prd.prompt.md`
- `.github/prompts/rai-planning/rai-plan-from-security-plan.prompt.md`
- `.github/instructions/rai-planning/rai-identity.instructions.md`
- `.github/instructions/rai-planning/rai-standards.instructions.md`
- `.github/instructions/rai-planning/rai-security-model.instructions.md`
- `.github/instructions/rai-planning/rai-impact-assessment.instructions.md`
- `.github/instructions/rai-planning/rai-backlog-handoff.instructions.md`
- `.github/instructions/rai-planning/rai-capture-coaching.instructions.md`
- `.github/instructions/rai-planning/rai-sensitive-uses-triggers.instructions.md`

GHCP Maturity Acknowledgment
Checklist
Required Checks
AI Artifact Contributions
- `/prompt-analyze` to review contribution
- `prompt-builder` review
The following validation commands must pass before merging:
- `npm run lint:md`
- `npm run spell-check`
- `npm run lint:frontmatter`
- `npm run validate:skills`
- `npm run lint:md-links`
- `npm run lint:ps`
- `npm run lint:ai-artifacts`

> Repository template used: `.github/PULL_REQUEST_TEMPLATE.md`