OpenAI & Nvidia Eye $100B Chip Deal to Fuel Next-Gen AI

OpenAI and Nvidia are reportedly planning a landmark $100 billion chip deal aimed at shaping the future of artificial intelligence. The investment underscores the growing competition and demand for the specialized data center chips that power advanced AI models. Under the arrangement, OpenAI is expected to acquire Nvidia’s GPU systems while Nvidia takes a non-controlling equity stake in the company. The partnership aims to dramatically scale OpenAI’s computational infrastructure.

ERG Deploys AI to Optimize Smelting Plant Safety and Efficiency

Eurasian Resources Group (ERG) has launched a project that applies artificial intelligence and machine learning to improve the safety and efficiency of its smelting plants. The project, implemented at a ferroalloys plant of its Kazchrome subsidiary, integrates AI tools into the existing IT system that controls a smelting furnace. Following a 90-day data analysis period, ERG plans to develop ten AI modules for production management, including tools for predictive diagnostics and smart optimization. The company expects the technology to deliver more stable production processes, improved performance, and lower operating costs.
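
ERG has not published technical details of the planned modules, but the predictive-diagnostics idea can be illustrated with a minimal sketch: flag furnace sensor readings that drift outside a rolling statistical band. The sensor values, window size, and threshold below are all hypothetical and illustrate the general technique only, not ERG’s system.

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_alerts(readings, window=20, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    A toy stand-in for predictive diagnostics: values far outside the
    rolling mean +/- threshold * stddev are reported as potential faults.
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append((i, value))
        history.append(value)
    return alerts

# Hypothetical furnace temperature trace (degrees C) with one injected excursion.
temperatures = [1620 + (i % 5) for i in range(200)]
temperatures[150] = 1710  # simulated abnormal spike
print(rolling_zscore_alerts(temperatures))  # -> [(150, 1710)]
```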

EU Automotive Robotics Installations Decline by 5% in 2024, IFR Reports

The automotive industry in the European Union installed 30,650 industrial robots in 2024, a 5% decrease from the previous year, according to the World Robotics 2025 report from the International Federation of Robotics (IFR). Six of the top ten vehicle-producing countries in the EU saw double-digit declines in robot installations. The IFR does not expect the automotive sector to drive growth for the robotics industry in 2025, citing lower-than-expected demand for electric vehicles and political uncertainty as reasons investments have been postponed. Despite the downturn, Hungary was a notable exception, recording a 305% increase in industrial robot installations in its automotive sector thanks to major new car industry projects.

Sakura Internet Launches ‘Sakura AI Engine’ for Generative AI API Integration

Sakura Internet has launched the “Sakura AI Engine,” an inference API platform for generative AI. Accessible via the “Sakura Cloud” control panel, the platform lets users integrate large language models (LLMs) and other foundation models into their applications through an API. The service is built on Sakura’s “High Firepower” cloud service and offers access to a range of domestic and international foundation models. Key features include REST APIs for easier application integration and a retrieval-augmented generation (RAG) function that connects to vector databases, enabling chatbots and FAQ systems built on in-house data. Inference runs on NVIDIA GPU resources and the platform is operated entirely within Japan, leveraging domestic data centers. Sakura Internet offers both a free “Foundation Model Plan” and a “Pay-As-You-Go Plan” to accommodate different user needs.
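
Sakura has not published the API schema in this announcement, so the sketch below only illustrates the general retrieval-augmented generation pattern the service describes: embed documents, retrieve the closest ones for a query, and prepend them to the prompt sent to an LLM endpoint. The sample documents, the bag-of-words embedding, and the `call_llm` stub are hypothetical placeholders, not the Sakura AI Engine API.

```python
import math
from collections import Counter

DOCUMENTS = [
    "Refunds are processed within 5 business days of the request.",
    "The support desk is open on weekdays from 9:00 to 18:00 JST.",
    "Enterprise plans include a dedicated technical account manager.",
]

def embed(text):
    # Toy embedding: a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def call_llm(prompt):
    # Placeholder for a request to a hosted inference API endpoint (hypothetical).
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(question):
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How long do refunds take?"))
```

In the actual service, the retrieval step would presumably be backed by the advertised vector database integration, and the `call_llm` stub would become a REST call to a hosted foundation model.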

Alibaba and OpenAI Unveil Major Advances in Generative AI Models and Infrastructure

Alibaba and OpenAI have both revealed significant developments in their generative AI initiatives. At its Yunqi conference, Alibaba unveiled a $52 billion AI roadmap, and its Qwen team released seven new models, including the 1-trillion-parameter Qwen3-Max for complex reasoning and the open-source vision model Qwen3-VL-235B-A22B, which can convert screenshots into functional code. Concurrently, OpenAI announced a major expansion of its AI infrastructure, adding five new “Stargate” supercomputer sites. OpenAI also released GPT-5-Codex, a new model designed for agentic coding workflows, featuring “adaptive reasoning” that allocates more computational power to more difficult tasks.
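
The screenshot-to-code capability attributed to Qwen3-VL maps naturally onto an OpenAI-compatible chat request with an image attachment. The sketch below is a hedged illustration of that pattern only: the base URL, model identifier, and file name are placeholders, not endpoints confirmed by either announcement.

```python
import base64
from openai import OpenAI  # pip install openai

# Hypothetical endpoint and credentials; substitute your provider's values.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="qwen3-vl-235b-a22b",  # placeholder model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Generate HTML/CSS that reproduces this UI."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```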

Mirantis Releases MKE 4.1.1 to Simplify Multi-Cloud Kubernetes Management

Mirantis has released Mirantis Kubernetes Engine for k0rdent (MKE 4k) 4.1.1, designed to streamline the management of distributed and complex Kubernetes infrastructures. This new version aims to reduce operational overhead, enhance upgrade predictability, and ensure consistency across multi-cloud and regulated environments. A key feature is its ability to manage an entire multi-cloud, multi-cluster Kubernetes estate from a single control point. The release integrates Mirantis k0rdent Enterprise 1.1.0, allowing for centralized deployment of MKE child clusters to prevent configuration drift. A new “dry-run” capability allows teams to preview configuration changes before production deployment, minimizing operational risks. The update also includes support for custom registries in air-gapped environments and offers greater networking flexibility with support for any CNI-compliant plug-in.
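
Mirantis describes the dry-run capability only by its purpose, so the sketch below just illustrates the underlying idea: diff a proposed cluster configuration against the live one and report what would change before anything is applied. The configuration fields are invented for illustration and are not MKE 4k objects.

```python
def dry_run_diff(live, desired, path=""):
    """Report what a configuration change would do, without applying it."""
    changes = []
    for key in sorted(set(live) | set(desired)):
        full = f"{path}.{key}" if path else key
        if key not in live:
            changes.append(f"ADD    {full} = {desired[key]!r}")
        elif key not in desired:
            changes.append(f"REMOVE {full} (was {live[key]!r})")
        elif isinstance(live[key], dict) and isinstance(desired[key], dict):
            changes.extend(dry_run_diff(live[key], desired[key], full))
        elif live[key] != desired[key]:
            changes.append(f"CHANGE {full}: {live[key]!r} -> {desired[key]!r}")
    return changes

# Invented example configs for a child cluster.
live_config = {"version": "1.30", "nodes": {"workers": 3}, "cni": "calico"}
desired_config = {"version": "1.31", "nodes": {"workers": 5}, "cni": "calico"}

for line in dry_run_diff(live_config, desired_config):
    print(line)
```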

HSBC and IBM Announce Quantum Computing Breakthrough for Algorithmic Bond Trading

HSBC, in collaboration with IBM, has announced what it describes as the first known empirical evidence that current quantum computers can add value in algorithmic bond trading. Using a combination of classical computing and IBM’s Heron quantum processor, the trial improved predictions of whether a trade would be filled at a quoted price by up to 34% compared with standard classical methods. The development is a significant step toward near-term applications of quantum technology in the financial services industry. The trial focused on optimizing responses to quote requests in over-the-counter markets. HSBC’s Head of Quantum Technologies, Philip Intallura, called the achievement a “Sputnik moment” for quantum computing in finance.

Flox Secures $25M Series B to Streamline Software Development Environments

Flox, a startup that simplifies the creation of software development environments, has secured $25 million in a Series B funding round. The investment was led by Addition, with participation from NEA, Hetz, Illuminate Financial, and D. E. Shaw. Spun out of investment firm D. E. Shaw in 2021, Flox provides a platform that lets engineers set up a development environment with a single command, replacing a process that can otherwise take hours. The platform encapsulates environment configurations in templates that can be shared and customized among team members, eliminating compatibility issues across different machines.
