OpenAI's Model Specification framework aims to balance safety, user freedom, and accountability in AI development, ensuring responsible and transparent AI systems.
OpenAI has introduced a Safety Bug Bounty program to engage the public in identifying potential AI vulnerabilities and safety risks. This initiative aims to enhance AI security by addressing issues such as agentic vulnerabilities and data exfiltration.
ChatGPT has launched a new feature to enhance online shopping with visually immersive product discovery and merchant integration.
The OpenAI Foundation plans to invest $1 billion in initiatives targeting disease cures, economic growth, AI resilience, and community support.
The Sora platform prioritizes user safety with advanced protective measures, ensuring a secure environment for creative expression.
OpenAI utilizes chain-of-thought monitoring to evaluate the alignment of its internal coding agents, focusing on real-world applications to identify risks and enhance AI safety measures.
OpenAI has announced its acquisition of Astral, aiming to enhance the capabilities of its Codex AI system for Python developers. This move is expected to accelerate the growth of next-generation programming tools.
The new GPT-5.4 mini and nano models are smaller, faster variants of GPT-5.4, optimized for coding, tool use, and high-volume workloads, aiming to improve performance and efficiency across a range of applications.
Codex Security has chosen not to use traditional static application security testing (SAST) tools, opting instead for AI-driven methods to improve vulnerability detection and reduce false positives.
Google has secured a licensing agreement with AI coding startup Windsurf, ending OpenAI's attempt to acquire the company. The deal allows Windsurf to continue operating independently, while several team members join Google.