GCP Release Notes: May 11, 2026

AlloyDB for PostgreSQL

Announcement

AlloyDB now offers extended support for clusters running major PostgreSQL versions that have reached their end-of-life (EOL) as defined by the PostgreSQL community. Extended support provides an additional three years of support after the end of regular support, giving you more time to plan and perform major version upgrades. For more information, see Extended support for AlloyDB for PostgreSQL.

Cloud SQL for MySQL

Feature

Cloud SQL for MySQL now supports regional endpoints for the Cloud SQL Admin API. This feature lets you direct your API calls to a region-specific endpoint, which ensures that your requests are handled by the specified region's frontend infrastructure, with some limitations: backend dependencies may still have global components. This enhances data locality and helps you meet strict compliance requirements. For more information, see Cloud SQL regional endpoints.

This feature is in Preview.
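The only change a caller makes is the API hostname; the resource paths stay the same. The following sketch assumes regional endpoints of the form sqladmin.REGION.rep.googleapis.com — verify the exact format against the Cloud SQL regional endpoints page before relying on it.

```python
# Sketch: pointing Cloud SQL Admin API calls at a regional endpoint.
# The "REGION.rep.googleapis.com" host format is an assumption here;
# confirm it in the Cloud SQL regional endpoints documentation.

GLOBAL_ENDPOINT = "https://sqladmin.googleapis.com"

def regional_endpoint(region: str) -> str:
    """Return the region-specific Admin API base URL (assumed format)."""
    return f"https://sqladmin.{region}.rep.googleapis.com"

def instances_get_url(project: str, region: str, instance: str) -> str:
    # Same resource path as the global endpoint; only the host changes.
    return (f"{regional_endpoint(region)}/v1/projects/{project}"
            f"/instances/{instance}")

print(instances_get_url("my-project", "europe-west4", "my-instance"))
```

Any client that lets you override the API base URL can be directed at the regional host this way; requests then stay within that region's frontend infrastructure, subject to the limitations noted above.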

Cloud SQL for PostgreSQL

Feature

Cloud SQL for PostgreSQL now supports regional endpoints for the Cloud SQL Admin API. This feature lets you direct your API calls to a region-specific endpoint, which ensures that your requests are handled by the specified region's frontend infrastructure, with some limitations: backend dependencies may still have global components. This enhances data locality and helps you meet strict compliance requirements. For more information, see Cloud SQL regional endpoints.

This feature is in Preview.

Cloud SQL for SQL Server

Feature

Cloud SQL for SQL Server now supports regional endpoints for the Cloud SQL Admin API. This feature lets you direct your API calls to a region-specific endpoint, which ensures that your requests are handled by the specified region's frontend infrastructure, with some limitations: backend dependencies may still have global components. This enhances data locality and helps you meet strict compliance requirements. For more information, see Cloud SQL regional endpoints.

This feature is in Preview.

Cloud Trace

Feature

Google Cloud Observability has expanded the supported locations for observability buckets, which store your trace data, to include the following:

  • asia-northeast1
  • asia-southeast1
  • me-west2
  • southamerica-east1
  • us-west4

For a list of supported locations, see Locations for observability buckets.

Gemini Enterprise Agent Platform

Change

You can purchase Provisioned Throughput for Gemma 4. To learn more, see the list of supported open models.

Google Distributed Cloud (software only) for VMware

Announcement

Google Distributed Cloud (software only) for VMware 1.34.400-gke.88 is now available for download. To upgrade, see Upgrade clusters. Google Distributed Cloud 1.34.400-gke.88 runs on Kubernetes v1.34.6-gke.200.

If you are using a third-party storage vendor, check the Google Distributed Cloud-ready storage partners document to make sure the storage vendor has already passed the qualification for this release.

After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

Fixed

The following issues were fixed in 1.34.400-gke.88:

  • Fixed vulnerabilities listed in Vulnerability fixes.
  • Fixed an issue where the gkectl check-config command failed during preflight checks when bundled ingress was disabled and the loadBalancer.vips.ingressVIP field was left blank. This failure occurred because the validation process incorrectly attempted to generate a network configuration for test VMs using the empty VIP, resulting in an invalid command (such as ip addr add /32) and causing test VM initialization to fail.
  • Resolved an issue that caused VMware cluster upgrades from non-advanced clusters to advanced clusters to get stuck. The system attempted to update immutable fields in the Hub membership. With this fix, the cluster operator preserves the original membership fields during the upgrade process instead of attempting to overwrite them so that the migration to an advanced cluster completes successfully.
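The gkectl check-config fix above amounts to a guard in the preflight validation: when bundled ingress is disabled, a blank ingress VIP is valid and must not be turned into a network command. A minimal sketch of that logic, with hypothetical names (the actual gkectl implementation is not public):

```python
# Sketch of the preflight guard described above (names are hypothetical).
# Before the fix, a blank loadBalancer.vips.ingressVIP produced the
# invalid command "ip addr add /32" and test-VM initialization failed.

def ingress_vip_commands(ingress_vip: str,
                         bundled_ingress_enabled: bool) -> list[str]:
    """Return the 'ip addr' commands to configure the preflight test VM."""
    if not ingress_vip:
        if bundled_ingress_enabled:
            raise ValueError("loadBalancer.vips.ingressVIP must be set "
                             "when bundled ingress is enabled")
        # Blank VIP with bundled ingress disabled: nothing to configure.
        return []
    return [f"ip addr add {ingress_vip}/32 dev eth0"]
```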

Google Distributed Cloud (software only) for bare metal

Announcement

Google Distributed Cloud (software only) for bare metal 1.34.400-gke.88 is now available for download. To upgrade, see Upgrade clusters. Google Distributed Cloud for bare metal 1.34.400-gke.88 runs on Kubernetes v1.34.6-gke.200.

After a release, it takes approximately 7 to 14 days for the version to become available for installations or upgrades with the GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.

If you use a third-party storage vendor, check the Google Distributed Cloud-ready storage partners document to make sure the storage vendor has already passed the qualification for this release of Google Distributed Cloud for bare metal.

Announcement

The following features were added in 1.34.400-gke.88:

  • Added a periodic health check to detect stale mounts of Secrets and ConfigMaps on pods. This helps identify rare scenarios where nodes serve outdated secret data after a rotation, which can lead to authentication failures. Currently enabled for GKE Identity Service pods, the check runs on each node and compares the locally cached volume content with the live data from the API server, reporting a mismatch only after a 5-minute grace period to allow for normal update delays.
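The comparison the health check performs can be pictured as follows: hash the volume content cached on the node, hash the live value from the API server, and flag a stale mount only once a mismatch has persisted past the grace period. This is an illustrative sketch under those stated assumptions, not the actual check implementation.

```python
# Sketch of the stale-mount detection described above: report a mismatch
# between node-cached volume content and live API-server data only after
# it persists past a 5-minute grace period (normal update delays).
import hashlib
from typing import Optional, Tuple

GRACE_PERIOD_S = 5 * 60

def check_mount(cached: bytes, live: bytes,
                first_mismatch_at: Optional[float],
                now: float) -> Tuple[bool, Optional[float]]:
    """Return (stale, first_mismatch_at) for one mounted Secret/ConfigMap."""
    if hashlib.sha256(cached).digest() == hashlib.sha256(live).digest():
        return False, None                 # in sync: clear any pending mismatch
    if first_mismatch_at is None:
        return False, now                  # start the grace-period clock
    if now - first_mismatch_at >= GRACE_PERIOD_S:
        return True, first_mismatch_at     # stale mount: report it
    return False, first_mismatch_at        # mismatch, but still in grace period
```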

Fixed

The following issues were fixed in 1.34.400-gke.88:

  • Fixed vulnerabilities listed in Vulnerability fixes.
  • Fixed an issue where, during the machine initialization phase, the etcd-events pod read the stale data directory when it started and attempted to reuse the old member ID to rejoin the cluster instead of the new one. Trying to use the old member ID to rejoin the cluster resulted in an infinite retry loop and caused the cluster to reject the connection. The fix ensures the /var/lib/etcd-events directory is cleared upon failure, and adds retry logic to kubeadm-reset to improve resiliency against transient API errors.
  • Fixed an issue where concurrent tasks on the same node failed when containerd restarted. After the fix, tasks are locked and run sequentially to ensure each task completes successfully before the next begins. Each lock is held for up to 20 minutes or until the task reaches success or failure. To bypass this safety mechanism and run tasks concurrently, add the baremetal.cluster.gke.io/concurrent-machine-update: "true" annotation to your cluster.
  • Fixed an issue where node upgrades could hang indefinitely and bypass the 20-minute maintenance timeout. This issue occurred when a node contained completed pods within a namespace that was in a Terminating state. Because the Kubernetes Eviction API rejects operations in terminating namespaces, the cluster controller entered an infinite retry loop. The fix updates the drain process to skip eviction for pods in terminal phases, allowing the upgrade to proceed normally.
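The node-drain fix in the last item reduces to a filter: pods already in a terminal phase (Succeeded or Failed) are skipped rather than sent to the Eviction API, which rejects operations in Terminating namespaces and previously caused the infinite retry loop. A minimal sketch of that selection, with hypothetical pod records:

```python
# Sketch of the drain fix described above: completed pods are excluded
# from eviction so that a Terminating namespace no longer stalls a drain.
TERMINAL_PHASES = {"Succeeded", "Failed"}

def pods_to_evict(pods: list[dict]) -> list[str]:
    """Given pod records with 'name' and 'phase', return names to evict."""
    return [p["name"] for p in pods if p["phase"] not in TERMINAL_PHASES]

pods = [
    {"name": "workload-1", "phase": "Running"},
    {"name": "batch-job-1", "phase": "Succeeded"},  # skipped: terminal
    {"name": "batch-job-2", "phase": "Failed"},     # skipped: terminal
]
print(pods_to_evict(pods))  # → ['workload-1']
```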

Security Command Center

Change

Compliance Manager can be enabled for a single project. For more information, see Enable Compliance Manager.

Change

New Standard tier activations at the organization level support the enhanced Standard tier features. New Standard tier activations at the project level continue to support Standard-legacy tier features. For more information, see Standard tier enhanced and automatically activated for some customers.

Source: Google Cloud Platform
