OpenAI hiring for head safety executive to mitigate AI risks


OpenAI is seeking a new "head of preparedness" to guide the company's safety strategy amid mounting concerns over how artificial intelligence tools could be misused.

According to the job posting, the new hire will be paid $555,000 to lead the company's safety systems team, which OpenAI says is focused on ensuring AI models are "responsibly developed and deployed." The head of preparedness will also be tasked with tracking risks and developing mitigation strategies for what OpenAI calls "frontier capabilities that create new risks of severe harm."

"This will be a stressful job and you'll jump into the deep end pretty much immediately," CEO Sam Altman wrote in an X post describing the position over the weekend.

He added, "This is a critical role at an important time; models are improving rapidly and are now capable of many great things, but they are also starting to present some real challenges."

OpenAI did not immediately respond to a request for comment.

The company's investment in safety efforts comes as scrutiny intensifies over artificial intelligence's impact on mental health, following multiple allegations that OpenAI's chatbot, ChatGPT, was involved in interactions preceding a number of suicides.

In one lawsuit earlier this year covered by CBS News, the parents of a 16-year-old sued the company, alleging that ChatGPT encouraged their son to plan his own suicide. That prompted OpenAI to announce new safety protocols for users under 18.

ChatGPT also allegedly fueled what a lawsuit filed earlier this month described as the "paranoid delusions" of a 56-year-old man who murdered his mother and then killed himself. At the time, OpenAI said it was working on improving its technology to help ChatGPT recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.

Beyond mental health concerns, worries have also grown over how artificial intelligence could be used to carry out cybersecurity attacks. Samantha Vinograd, a CBS News contributor and former top Homeland Security official in the Obama administration, addressed the issue on CBS News' "Face the Nation with Margaret Brennan" on Sunday.

"AI doesn't just level the playing field for certain actors," she said. "It actually brings new players onto the pitch, because individuals, non-state actors, have access to relatively low-cost technology that makes different kinds of threats more credible and more effective."

Altman acknowledged the growing safety hazards AI poses in his X post, writing that while the models and their capabilities have advanced quickly, challenges have also started to arise.

"The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities," he wrote.

Now, he continued, "We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides ... in a way that lets us all enjoy the tremendous benefits."

According to the job posting, a qualified applicant would have "deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains" and have experience with "designing or executing high-rigor evaluations for complex technical systems," among other qualifications.

OpenAI first announced the creation of a preparedness team in 2023, according to TechCrunch.

Edited by Aimee Picchi
