Prompt Cards

Incident Response Plan for AI Failures
Create an incident response plan specific to AI: detection, containment, user comms, rollback, forensic logging, and post-incident retraining rules. Include severity levels and example incidents.
Tags: incident-response, rollback, postmortem, ops, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Blue Team Monitoring: Signals and Alerts
Define monitoring signals: policy violations, anomaly detection, tool misuse attempts, unusual output distributions, and drift. Provide alert thresholds, runbooks, and an on-call playbook.
Tags: monitoring, alerts, drift, ops, runbooks, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Red Team Program for Recursive Systems
Design a continuous red team program: scenarios, cadence, severity scoring, triage workflow, and how findings feed back into the improvement loop. Include a template for red-team reports.
Tags: red-teaming, security, adversarial-testing, governance, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Safety Regression Suite (What Must Never Break)
Create a safety regression suite: prompt injection tests, data leakage tests, refusal/guardrail tests, and policy adherence checks. Include how to maintain and evolve the suite over time.
Tags: safety-regression, testing, prompt-injection, privacy, guardrails
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Offline Sandbox for Iteration (Containment)
Design an offline sandbox environment for experimenting with improvements: isolated data, limited tools, no external side effects, and deterministic replay. Provide a checklist for containment.
Tags: sandbox, containment, offline-testing, security, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Evaluation Ladder: Unit→Integration→System→Live
Design an evaluation ladder for recursive improvement: unit tests, integration tests, simulation, canaries, and production monitoring. Provide pass/fail gates and minimum coverage targets.
Tags: evaluation, testing, canary, monitoring, quality, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Automation Boundaries: Action vs Advice vs Draft
Create a boundary model that separates: advisory outputs, drafts, and automated actions. Include criteria for graduating features from advice→action, and safety evidence required.
Tags: automation, boundaries, graduation, governance, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Human-in-the-Loop Design: When Humans Must Decide
Define where humans must remain in the loop: high-impact actions, security-sensitive steps, and ambiguous decisions. Provide a decision taxonomy and UI/ops requirements for approval workflows.
Tags: human-in-the-loop, governance, approvals, risk, ops
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Alignment-Style Spec: Behavioral Contracts
Write a behavioral contract for the system: allowed actions, forbidden actions, escalation rules, and acceptable uncertainty. Include examples and counterexamples to reduce ambiguity.
Tags: alignment, specification, behavioral-contract, safety, policy
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Change Control for AI Systems (RFC Process)
Create an RFC-style change control process tailored to recursive AI: what must be documented, reviewers, rollout plan, rollback triggers, and postmortem requirements. Provide a reusable RFC template.
Tags: change-control, RFC, governance, rollout, rollback, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Safety-Gated Iteration Loop (Design Pattern)
Design a safety-gated iteration loop: propose→simulate→test→review→deploy→monitor. Include stage gates, required evidence at each gate, and ‘stop conditions’ that automatically halt rollout.
Tags: iteration, stage-gates, monitoring, recursive-ai, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:
Threat Modeling: Misuse + Model Failure Modes
Perform a structured threat model (STRIDE-style or similar) for a recursive AI pipeline. Cover misuse, data exfiltration, prompt injection, model drift, and over-automation. Output mitigations and test cases.
Tags: threat-model, security, prompt-injection, recursive-ai, safety
Author: Assistant
Created at: 2026-02-02 00:00:00
Average Rating:
Total Ratings:

Curio AI Brain

Available in Chrome Web Store!