‘AI agents are becoming a problem’: OpenAI CEO flags risks; offers $555K role


OpenAI is hiring a highly paid Head of Preparedness amid increasing fears of AI systems exploiting security flaws and harming mental well-being. CEO Sam Altman acknowledged that models are uncovering critical vulnerabilities, while also highlighting AI's potential mental health impact. The role will tackle cybersecurity, biosecurity, and self-improving AI risks, a significant step as AI's dual nature becomes apparent.

OpenAI is actively recruiting a Head of Preparedness to address mounting concerns about AI systems discovering critical vulnerabilities and impacting mental health, CEO Sam Altman announced on X. The position, offering $555,000 plus equity, comes as the company acknowledges its models "are beginning to find critical vulnerabilities" in computer security systems.

Altman warned that while AI models are "now capable of many great things," they are simultaneously presenting "some real challenges" that require immediate attention. The admission marks a significant shift in how OpenAI publicly addresses AI safety concerns, particularly around cybersecurity threats and mental health impacts.

Growing concerns over AI-powered cyber threats

The announcement follows recent reports of AI systems being weaponized for cyberattacks.

Last month, rival Anthropic revealed that Chinese state-sponsored hackers manipulated its Claude Code tool to target about 30 global entities, including tech companies, financial institutions, and government agencies, with minimal human intervention.

According to OpenAI's job listing, the Head of Preparedness will oversee the company's preparedness framework, focusing on "frontier capabilities that create new risks of severe harm."

Key responsibilities include developing capability evaluations, threat models, and mitigations across critical risk areas including cybersecurity, biosecurity, and self-improving AI systems.

Mental health impact finally acknowledged by OpenAI CEO

Altman specifically highlighted mental health as a concern after OpenAI saw "a preview of" AI's potential mental health impact in 2025. This acknowledgment comes amid several high-profile lawsuits alleging ChatGPT's involvement in teen suicides, and reports of AI chatbots feeding users' delusions and conspiracy theories.

The role requires someone who can "help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm," Altman stated, calling it "a stressful job" where the hire will "jump into the deep end pretty much immediately."

The position became vacant after multiple leadership changes in OpenAI's safety teams throughout 2024-2025, including the departure of former Head of Preparedness Aleksander Madry.
