Nuclear and Biological Attacks – OpenAI to Study ‘Catastrophic’ Risks

<p>The latest endeavor from OpenAI has something of a secret agent ring to it.
Imagine a secret team, code-named "Preparedness," working diligently
behind the scenes, their mission: to save the world from AI catastrophe. And if
the idea of an AI company being worried about the potential disasters caused by
AI doesn't give you the sweats, then you need to sit and have a think.</p>

<p>Yes, you read that right. The <a href="https://www.financemagnates.com/trending/openai-soars-to-become-third-most-valuablestartup/" target="_blank" rel="follow" data-article-link="true">third most valuable startup in the world</a>, OpenAI, is so serious about the potential
risks around AI that it has conjured up this covert squad, and they're <a href="https://openai.com/blog/frontier-risk-and-preparedness">ready to tackle</a>
anything from rogue AI attempting to trick gullible humans (<a href="https://www.urbandictionary.com/define.php?term=Deepfake">deepfakes</a>,
anyone?) to the stuff of sci-fi thrillers including “chemical, biological,
radiological, and nuclear” threats. Yep. Nuclear.</p>

<p>Prepare for Anything</p>

<p>The mastermind behind Preparedness, Aleksander Madry, hails from MIT’s
Center for Deployable <a href="https://www.financemagnates.com/terms/m/machine-learning/">Machine Learning</a>. He’s like a real-life <a href="https://en.wikipedia.org/wiki/John_Connor">John Connor</a>, albeit
without Arnie. OpenAI's Sam Altman, known for his <a href="https://www.cnbc.com/2023/05/31/ai-poses-human-extinction-risk-sam-altman-and-other-tech-leaders-warn.html">AI
doomsday prophecies</a>, doesn't mess around when it comes to the existential
threats AI might pose. While he's not in the business of fighting cyborgs with
his cigar-smoking friend, he's certainly ready to tackle the darker side of AI.</p>

<blockquote><p lang="en" dir="ltr">We are building a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today's models to AGI. Goal: a quantitative, evidence-based methodology, beyond what is accepted as possible: <a href="https://t.co/8lwtfMR1Iy">https://t.co/8lwtfMR1Iy</a></p>— OpenAI (@OpenAI) <a href="https://twitter.com/OpenAI/status/1717589499533042034?ref_src=twsrc%5Etfw">October 26, 2023</a></blockquote>

<p>A Contest with Consequences</p>

<p>In their quest for vigilance, OpenAI is offering a whopping $25,000
prize and a seat at the Preparedness table for the <a href="https://openai.com/form/preparedness-challenge">ten brightest submissions
from the AI community</a>. They're looking for ingenious yet plausible
scenarios of AI misuse that could spell catastrophe. Your mission, should you
choose to accept it: save the world from AI mayhem.</p>

<p>Undercover Work in the AI Safety Realm</p>

<p>Preparedness isn't your typical band of heroes. Their role extends
beyond facing villains. They'll also craft an AI safety bible, covering the
ABCs of <a href="https://www.financemagnates.com/terms/r/risk-management/">risk management</a> and prevention. OpenAI knows that the tech they're
cooking up can be a double-edged sword, so they're putting their resources to
work to make sure it stays on the right side.</p>

<p>Ready for Anything</p>

<p>The unveiling of Preparedness at a <a href="https://www.gov.uk/government/topical-events/ai-safety-summit-2023">U.K.
government AI safety summit</a> is no coincidence. It's OpenAI's bold
declaration that they're taking AI risks to heart, as they prepare for a future
where AI could be the answer to everything, or a serious problem.</p>

This article was written by Louis Parks at www.financemagnates.com.
