GCP Release Notes: December 11, 2025

AlloyDB for PostgreSQL

Feature

AlloyDB now supports the C4 machine series, powered by 6th generation Intel Xeon (Granite Rapids) processors. These instances offer large machine sizes, with up to 288 vCPUs and 2,232 GiB of RAM, that let you run extremely demanding workloads. For more information, see Choose an AlloyDB machine type. This feature is generally available (GA).

Apigee API hub

Feature

Model Context Protocol (MCP) support in API hub

API hub now supports the Model Context Protocol (MCP) as a first-class API style. This enables you to ingest, register, and manage MCP APIs and their associated tools.

Key capabilities include:

  • MCP API registration: Register MCP APIs manually or via API hub APIs to create a single registry for your agentic services.
  • MCP tools: Attach MCP specification files to your APIs. API hub parses these files to automatically extract and display the MCP tools in the UI.

For more information, see API resources overview, Register MCP APIs, and Manage MCP tools.

Cloud Monitoring

Feature

You can now add a widget to a dashboard that lets you manage the settings for a dashboard variable.

Feature

The Google Cloud CLI (gcloud) commands to manage Cloud Monitoring alerting policies are now generally available. For more information, see gcloud monitoring policies.

Cloud VPN

Feature

Cloud VPN provides predefined dashboards in the Google Cloud console for quick, single-view insight into system health and tunnel performance. These dashboards display key metrics that let you monitor project-wide health and diagnose specific tunnels without manual configuration. This feature is generally available (GA).

For more information, see View Monitoring dashboards.

Container-Optimized OS

Changed

cos-117-18613-439-65

  • Kernel: COS-6.6.111
  • Docker: v24.0.9
  • Containerd: v1.7.28
  • GPU Drivers: See List

Security

Fixed CVE-2025-38678 in the Linux kernel.

Security

Fixed CVE-2025-40272 in the Linux kernel.

Security

Fixed CVE-2025-40104 in the Linux kernel.

Security

Fixed CVE-2025-47914 and CVE-2025-58181 in dev-go/crypto.

Feature

Added patches to handle IDPF tx timeouts.

Security

Fixed CVE-2025-40248 in the Linux kernel.

Security

Fixed CVE-2025-40320 in the Linux kernel.

Fixed

Upgraded app-admin/google-guest-configs to v20251014.00.

Security

Fixed CVE-2025-40273 in the Linux kernel.

Security

Fixed CVE-2025-40271 in the Linux kernel.

Security

Fixed CVE-2025-40319 in the Linux kernel.

Security

Fixed CVE-2025-40251 in the Linux kernel.

Security

Fixed CVE-2025-40231 in the Linux kernel.

Security

Fixed CVE-2025-21868 in the Linux kernel.

Security

Fixed CVE-2025-40250 in the Linux kernel.

Changed

Runtime sysctl changes:

  • Changed: fs.file-max: 811788 -> 811701

Security

Fixed CVE-2025-40220 in the Linux kernel.

Security

Fixed CVE-2025-22103 in the Linux kernel.

Security

Fixed CVE-2025-40324 in the Linux kernel.

Security

Fixed CVE-2025-40293 in the Linux kernel.

Security

Fixed CVE-2025-40297 in the Linux kernel.

Security

Fixed CVE-2025-40292 in the Linux kernel.

Security

Fixed CVE-2025-40268 in the Linux kernel.

Security

Fixed CVE-2025-40256 in the Linux kernel.

Security

Fixed CVE-2025-38057 in the Linux kernel.

Google Distributed Cloud (software only) for VMware

Fixed

Issues were fixed in 1.34.0-gke.566.

Google Distributed Cloud (software only) for bare metal

Announcement

Google Distributed Cloud for bare metal 1.34.0-gke.566 is now available for download. To upgrade, see Upgrade clusters. Distributed Cloud for bare metal 1.34.0-gke.566 runs on Kubernetes v1.34.1-gke.2900.

After a release, it takes approximately 7 to 14 days for the version to become available for installations or upgrades with the GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

If you use a third-party storage vendor, check the Ready storage partners document to make sure the storage vendor has already passed the qualification for this release of Distributed Cloud for bare metal.

Feature

The following features were added in 1.34.0-gke.566:

  • Preview: Vertical Pod autoscaling can now be configured to use your Prometheus instance as a persistent history provider for long-term CPU and memory usage data.

  • Preview: Added support for horizontal Pod autoscaling that uses custom metrics from your Prometheus server to scale your applications, eliminating the operational burden of manually deploying and managing the adapter, its configuration, and RBAC. The automated solution handles the entire lifecycle, making it simpler to scale applications based on the metrics that matter most to your business.

  • GA: Added new ServiceCIDR resources in your cluster. The Kubernetes control plane uses these resources to manage the IP address ranges for your Services automatically.

  • Preview: Added support for fast failover for the egress NAT gateway running in high availability. This feature improves both the reliability and throughput of egress traffic.

  • GA: Added support for skip-minor-version cluster upgrades. You can upgrade your cluster control plane nodes (and the entire cluster, if worker node pools aren't pinned at a lower version) directly to two minor versions above the current version. Added the bmctl upgrade intermediate-version command to print the intermediate version for a skip-minor-version upgrade.

  • Preview: Added support for advanced networking features on admin clusters. This capability lets you specify multiple network interfaces for Pods and lets you use bundled load balancing with BGP on your version 1.34 or higher admin clusters.
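The Prometheus-backed horizontal Pod autoscaling described above ultimately drives a standard HorizontalPodAutoscaler. A minimal sketch using a custom Pods metric follows; the metric name and target value are assumptions for illustration, not values from this release:

```yaml
# Illustrative HPA scaling on an assumed Prometheus-derived Pods metric.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # assumed custom metric name
        target:
          type: AverageValue
          averageValue: "100"
```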
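The ServiceCIDR resources mentioned above use the standard Kubernetes networking API. A minimal manifest might look like the following; the resource name and CIDR range are illustrative only:

```yaml
# Illustrative ServiceCIDR; the name and CIDR value are examples.
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-range
spec:
  cidrs:
    - 10.96.100.0/24
```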

Changed

The following functional changes were made in 1.34.0-gke.566:

  • Enabled Ansible SSH pipelining by default to improve performance. This can be disabled by adding the annotation preview.baremetal.cluster.gke.io/ansible-ssh-pipelining: "disable" to the Cluster custom resource.

  • During cluster creation and update operations, NodePools validate IP address uniqueness against all existing underlay Nodes, regardless of status. Node deprovisioning is blocked until the associated Kubernetes Node is deleted, unless the node-deletion-timeout-seconds annotation is present on the cluster.

  • Upgraded containerd from 1.7 to 2.0.

  • Registry mirror configuration information has been migrated to the hosts.toml containerd config file.

  • Upgrade preflight checks now validate PodDisruptionBudgets.

  • GKE Identity Service has been migrated from a Deployment to a DaemonSet for improved reliability on control plane nodes.

  • Added support for Red Hat Enterprise Linux 9.6.

  • Removed support for Red Hat Enterprise Linux 9.2 as it is beyond the Red Hat support window.

  • Added support for the 6.14 kernel package for use with Ubuntu 24.04 LTS.
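The Ansible SSH pipelining opt-out above is applied as an annotation on the Cluster custom resource. A sketch of what that might look like follows; the cluster name, namespace, and apiVersion are assumptions, while the annotation key and value come from the release note:

```yaml
# Illustrative Cluster resource with SSH pipelining disabled.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: my-cluster                      # assumed cluster name
  namespace: cluster-my-cluster         # assumed namespace
  annotations:
    preview.baremetal.cluster.gke.io/ansible-ssh-pipelining: "disable"
```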
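With the containerd 2.0 upgrade, registry mirror configuration moves to per-registry hosts.toml files. A sketch of the general hosts.toml shape follows; the file path and mirror endpoint are placeholders, not values from this release:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml (illustrative path and mirror)
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```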

Fixed

The following issues were fixed in 1.34.0-gke.566:

  • Updated the bmctl restore command so that it restores the Node Problem Detector systemd Service for admin clusters.

  • Fixed the etcd-cleanup job timeout issue caused by the use of incorrect certificates.

  • Fixed an issue where the cluster restore process leaves the Kubelet certificate files as regular files instead of symbolic links, preventing certificate rotation.

Issue

For information about the latest known issues, see Google Distributed Cloud for bare metal known issues in the Troubleshooting section.

Google Kubernetes Engine

Feature

GKE Autopilot now supports N4A machine types in Public Preview, available on clusters running version 1.34.1-gke.3403001 or later.
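On Autopilot, a machine series is typically requested through a node selector on the workload. A sketch follows, assuming the machine-family selector value for N4A is `n4a` (an assumption, since the release note doesn't state the selector value):

```yaml
# Illustrative Pod requesting an N4A node on Autopilot.
apiVersion: v1
kind: Pod
metadata:
  name: n4a-example
spec:
  nodeSelector:
    cloud.google.com/machine-family: n4a   # assumed selector value for N4A
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          cpu: "2"
          memory: 8Gi
```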

Looker Studio

Feature

Pro feature: Share and schedule reports with Slack

You can now send Looker Studio reports to Slack channels and Slack users in your Slack workspaces. This feature is now out of preview and remains available to Pro users only.

Feature

Expand tables to show all rows

When downloading or scheduling a report, you can now configure table visualizations to expand to display up to 2,000 rows.

Feature

Partner connection launch update

New partner connectors have been added to the Looker Studio Connector Gallery.

Security Command Center

Feature

Security Command Center Risk Engine now supports attack paths for Cloud Build, and Cloud Build resources can be included in the high-value resource set.

Spanner

Feature

Spanner Data Boost now includes a new quota, Data Boost concurrent requests in milli-operations per second per region, which applies more fine-grained control over how multiple concurrent requests in your project share Data Boost resources. Instead of counting each request as 1 unit under the existing concurrency quota, Data Boost now meters requests at a granularity of 1/1000, allowing a greater number of concurrent requests to make progress. For more information, see Quotas and limits.
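The effect of the 1/1000 granularity can be shown with back-of-envelope arithmetic; the quota values and per-request costs below are invented for illustration:

```python
def max_concurrent_requests(quota_milli_ops: int, cost_milli_ops_per_request: int) -> int:
    """How many concurrent requests fit under a milli-operation quota.

    Under whole-unit accounting each request consumes a full unit
    (1000 milli-ops); metering at 1/1000 granularity lets lighter
    requests leave room for more concurrency.
    """
    return quota_milli_ops // cost_milli_ops_per_request

# Whole-unit accounting: every request costs 1000 milli-ops.
old = max_concurrent_requests(10_000, 1000)   # 10 concurrent requests

# Milli-op accounting: a light request might cost, say, 250 milli-ops.
new = max_concurrent_requests(10_000, 250)    # 40 concurrent requests

print(old, new)
```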

Source: Google Cloud Platform
