<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Quinton Anderson</title><link>https://quintona.github.io/blog/</link><description>Recent content on Quinton Anderson</description><generator>Hugo</generator><language>en-us</language><atom:link href="https://quintona.github.io/blog/index.xml" rel="self" type="application/rss+xml"/><item><title/><link>https://quintona.github.io/blog/posts/agent_risk_management_overview_blog-0.1/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://quintona.github.io/blog/posts/agent_risk_management_overview_blog-0.1/</guid><description>&lt;h1 id="from-model-risk-to-agent-risk-a-practical-risk-management-approach-for-organisations-rolling-out-agents">From Model Risk to Agent Risk: A Practical Risk Management Approach for Organisations Rolling Out Agents&lt;/h1>
&lt;blockquote>
&lt;p>&lt;strong>Core idea:&lt;/strong> Agent risk management is not about proving a technical system is safe at a point in time. It is about governing &lt;strong>delegated autonomy&lt;/strong> in a &lt;strong>socio-technical system&lt;/strong> so that outcomes stay within policy, performance targets, and risk appetite over time: a socio-technical system with self-correcting mechanisms.&lt;/p>&lt;/blockquote>
&lt;p>SR 11-7 does not mention hallucination rates. It does not define retrieval precision. It has no section on prompt injection, corpus freshness, or agentic decision boundaries. It was issued in 2011, long before today’s enterprise rollout of large language models and agents.&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup>&lt;/p></description></item></channel></rss>