
Earlier this month, OpenAI published the System Card and Preparedness Framework scorecard assessing the safety of GPT-4o, focused on the “novel risks” of its audio capabilities while also evaluating its text and vision capabilities. OpenAI deemed GPT-4o “low risk” for cybersecurity, biological threats, and model autonomy (they’re not worried it will become sentient and harm humans directly), but scored “persuasion” as a “borderline medium” risk. This week's newsletter summarizes their risk assessment, then ventures into other ways that Gen AI persuades at scale, within the “influence industrial complex,” and in the form of increasingly charming chatbots. The newsletter also includes links to some interesting stories - from the ridiculous but inventive (5 Guns to Have When AI Becomes Sentient) to the contentious (a post providing excellent short, medium, and long explanations of the latest changes to SB 1047, with smart takes on the pluses/minuses). If you haven't subscribed, please do - that it's free is only its fifth best feature.

https://open.substack.com/pub/aixchronicles/p/thats-one-charming-mothering-chatbot?r=3xiea4&utm_campaign=post&utm_medium=web