BigQuery
Feature
You can configure BigQuery sharing listings for multiple regions, which allows you to share datasets and linked replicas across global geographies simultaneously. For more information, see Create a listing. This feature is generally available (GA).
Breaking
Starting June 1, 2026, due to changes in Google Ads data retention policies, the BigQuery Data Transfer Service connectors for Google Ads, Search Ads 360, and Google Analytics 4 will stop populating data for backfill runs with dates earlier than 37 months from the current date.
For more information about the changes to the Google Ads data retention policies, see New Data Retention Policy for Google Ads starting June 1, 2026.
Cloud Composer
Announcement
Cloud Composer 2 environments can no longer be created in Johannesburg (africa-south1). We're switching this region to support only Cloud Composer 3 environments. Existing Cloud Composer 2 environments in this region aren't affected by this change.
Cloud Trace
Feature
The following remote MCP servers automatically generate a trace span for
tools/call operations. These spans can help you understand the behavior of
your agentic applications. For more information, see
Investigate MCP calls using Trace.
- Agent Search
- AlloyDB for PostgreSQL
- Google Security Operations
Gemini Enterprise Agent Platform
Fixed
Fixed an issue where the `audio_track_extraction` feature (Gemini Embedding 2 only) did not work. For more information, see Issue #504505771.
Google Distributed Cloud (software only) for VMware
Announcement
Google Distributed Cloud (software only) for VMware 1.35.0-gke.525 is now available for download. To upgrade, see Upgrade clusters. Google Distributed Cloud 1.35.0-gke.525 runs on Kubernetes v1.35.2-gke.300.
If you are using a third-party storage vendor, check the Google Distributed Cloud-ready storage partners document to make sure the storage vendor has already passed the qualification for this release.
After a release, it takes approximately 7 to 14 days for the version to become available for use with GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.
Announcement
The following features were added in 1.35.0-gke.525:
Platform update to Kubernetes 1.35: This release updates the underlying Kubernetes version to 1.35.
- As part of the sunset of cgroups v1, the legacy `ubuntu`, `ubuntu_containerd`, and `cos` `OSImageType` options are no longer supported in this release.
- For more information on migrating to cgroups v2, see the Kubernetes documentation on migrating to cgroupv2.
- This release also upgrades the container runtime, containerd, from version 2.0 to 2.1.
The Ubuntu image has been upgraded to 24.04 on all node types for 1.35.0-gke.525. When you upgrade your control plane and node pools, the nodes are automatically recreated with the new operating system image.
`gkectl` prints the Operation ID and Operation Type to the console after cluster operations.
For advanced clusters, the default node pool update policy is changed to parallel instead of sequential. This applies to all advanced clusters (both new and existing upon upgrade). To customize or revert this behavior, use the `nodePoolUpdatePolicy` and `maximumConcurrentNodePoolUpdate` fields in the cluster configuration file (see the sketch below).
The default Docker bridge IP for advanced clusters has been changed to `169.254.123.1/24` instead of `172.17.0/16`. This change reduces the likelihood of conflicts with user-configured networks. If you use the `172.17.0/16` range for other purposes, cluster creation might fail due to this conflict.
`vsphere-csi-controller` in advanced clusters is deployed on the user cluster control plane nodes instead of worker nodes. This architectural change happens automatically during upgrade and does not impact resource sizing recommendations.
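The release note names the `nodePoolUpdatePolicy` and `maximumConcurrentNodePoolUpdate` fields but not their exact placement. The following is a minimal sketch, assuming the update policy is a top-level section of the user cluster configuration file; verify the schema against the cluster configuration reference for your version.

```yaml
# Illustrative excerpt from a user cluster configuration file.
# The nesting shown here is an assumption.
nodePoolUpdatePolicy:
  # A value of 1 restores the previous sequential behavior; larger values
  # allow that many node pools to update in parallel.
  maximumConcurrentNodePoolUpdate: 1
```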
Fixed
The following issues were fixed in 1.35.0-gke.525:
- Fixed vulnerabilities listed in Vulnerability fixes.
- Resolved an issue that caused VMware cluster upgrades from non-advanced clusters to advanced clusters to get stuck. The system attempted to update immutable fields in the Hub membership. With this fix, the cluster operator preserves the original membership fields during the upgrade process instead of attempting to overwrite them so that the migration to an advanced cluster completes successfully.
- Fixed an issue in advanced user clusters where the `cloud.google.com/gke-nodepool` label for workload node pools unexpectedly included an `-np` suffix. This caused pods using a `nodeSelector` that targets the original pool name (such as Apigee workloads) to fail to schedule. For clusters on older versions experiencing this issue, you can work around it by manually setting the expected label in the node pool configuration (see the sketch after this list).
- Fixed an issue where setting the deprecated `stackdriver.enableVPC` field to `true` in a cluster configuration file would block upgrades to an advanced cluster. The `stackdriver.enableVPC` field has been deprecated and its setting is ignored during the upgrade validation process. For clusters on older versions experiencing this issue, you can work around it by removing the field or setting it to `false` in your configuration file before upgrading.
- Fixed an issue where the node-problem-detector was incorrectly deployed onto non-advanced VMware clusters. This caused the containerd runtime to continuously restart on affected nodes due to incompatible health check configurations, leading to etcd/CRI failures (such as errors connecting to `/run/containerd/containerd.sock`) and unsuccessful cluster upgrades.
- Fixed an issue where leading or trailing whitespace in the `proxy.url` field, or spaces after commas in the `proxy.noProxy` list in the cluster configuration file, caused advanced cluster creation or upgrades to fail. This release adds validation to reject such malformed configurations before operations begin. For upgrades, logic has been added to automatically clean up these spaces in the operator cluster state to prevent upgrade failures. If you are using an older version and encounter this issue, ensure that all proxy configuration fields are free of extraneous spaces.
- Fixed an issue where retrying the `gkectl upgrade admin` command after a previous failure would fail with a "failed to create credential namespace in bootstrap cluster" error. This occurred because the setup process failed to handle resources that already existed from the previous attempt. This fix resolves the issue described in `gkectl upgrade admin` fails on retry with "AlreadyExists" errors in the bootstrap cluster, eliminating the need to manually delete conflicting resources from the bootstrap cluster before retrying.
- Fixed an issue where the system's root certificates were ignored when a custom CA certificate was configured for a registry mirror or private registry. This caused cluster creation or upgrades to fail with an `x509: certificate signed by unknown authority` error when attempting to pull images. The system now honors both the custom CA and the system's root certificates.
- Fixed an issue where vSphere VM creation could hang indefinitely, with the operation remaining stuck in the Creating phase and logs repeatedly reporting “VM creation in progress.” This fix introduces a one-hour timeout for VM creation and ensures the machine status is updated in Kubernetes during each reconciliation, eliminating the need to manually delete the stuck VM resource from the temporary bootstrap cluster to recover.
- Fixed an issue where upgrading non-advanced clusters with OIDC configuration to advanced clusters prevented users from logging in through Anthos Identity Service (AIS) immediately after the upgrade.
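As a workaround for the node pool label issue above on older versions, the following is a minimal sketch of pinning the expected label in a user cluster configuration file. The pool name, sizing values, and the exact location of the `labels` field are assumptions for illustration.

```yaml
# Illustrative node pool excerpt: explicitly set the label that workloads
# select on, so pods with a nodeSelector for the original pool name keep
# scheduling. Pool name and sizing values are placeholders.
nodePools:
- name: apigee-pool
  cpus: 4
  memoryMB: 8192
  replicas: 3
  labels:
    cloud.google.com/gke-nodepool: apigee-pool
```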
Google Distributed Cloud (software only) for bare metal
Announcement
Google Distributed Cloud (software only) for bare metal 1.35.0-gke.525 is now available for download. To upgrade, see Upgrade clusters. Google Distributed Cloud for bare metal 1.35.0-gke.525 runs on Kubernetes v1.35.2-gke.300.
After a release, it takes approximately 7 to 14 days for the version to become available for installations or upgrades with the GKE On-Prem API clients: the Google Cloud console, the gcloud CLI, and Terraform.
If you use a third-party storage vendor, check the Google Distributed Cloud-ready storage partners document to make sure the storage vendor has already passed the qualification for this release of Google Distributed Cloud for bare metal.
Announcement
The following features were added in 1.35.0-gke.525:
Platform update to Kubernetes 1.35: This release updates the underlying Kubernetes version to 1.35.
- If you use Red Hat Enterprise Linux (RHEL) 7 or 8, which default to cgroups v1, you must manually configure your operating system to enable cgroups v2 before upgrading. For instructions, see the Red Hat knowledge base article on enabling cgroup v2.
- For more information on migrating to cgroups v2, see the Kubernetes documentation on migrating to cgroupv2.
- This release upgrades the container runtime, containerd, from version 2.0 to 2.1.
Added a periodic health check to detect stale secret and ConfigMap mounts on Google Kubernetes Engine pods. To account for normal propagation delays, a content mismatch is only reported as an error if the data remains stale for more than 5 minutes.
Upgraded the Ansible version to 2.18. This version requires Python 3.9 on target nodes. For customers using Red Hat Enterprise Linux, version 8.10 or later is required because the default Python version in earlier Red Hat 8 releases (Python 3.6) is not supported by Ansible 2.18.
You can use the header section of the cluster configuration file to specify registry mirrors for your clusters. This simplifies the management of registry mirrors and provides a more consistent configuration experience. For instructions on how to update or remove these settings, see the Registry Mirror documentation.
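The note above doesn't show the header-section syntax; the following is a minimal sketch, assuming a `registryMirrors` block with commonly used field names. Check the Registry Mirror documentation for the authoritative schema.

```yaml
# Illustrative header section of a bare metal cluster configuration file.
# Field names and file paths are assumptions for illustration only.
registryMirrors:
  - endpoint: https://registry.example.com/v2/library
    caCertPath: /root/ca.crt                            # CA for a self-signed mirror
    pullCredentialConfigPath: /root/.docker/config.json # credentials for the mirror
```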
Preview: Added support for EgressDSCP tagging. With this feature, you can mark IP headers with specific Differentiated Services Code Point (DSCP) values on packets leaving the cluster to prioritize network traffic. To use this feature, you must set `preview.baremetal.cluster.gke.io/traffic-selector` to `enable` in your cluster configuration and manage traffic selection using the `EgressDSCP` and `TrafficSelector` custom resources. For more information, see Configure EgressDSCP tagging; a minimal configuration sketch follows below.
`bmctl` prints the Operation ID and Operation Type to the console after cluster installation and upgrade operations.
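In the sketch below, the annotation key and value and the custom resource names come from the note above; everything else (API group, spec fields, names) is a placeholder, since the release note does not define the EgressDSCP schema.

```yaml
# Enable the preview feature on the cluster resource (annotation key and value
# are taken from the release note above).
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: my-cluster
  namespace: cluster-my-cluster
  annotations:
    preview.baremetal.cluster.gke.io/traffic-selector: enable
---
# Hypothetical EgressDSCP resource: the apiVersion and spec fields below are
# illustrative placeholders, not a documented schema.
apiVersion: example.gke.io/v1
kind: EgressDSCP
metadata:
  name: mark-voice-traffic
spec:
  dscp: 46                  # for example, Expedited Forwarding
  trafficSelectorRef:
    name: voice-traffic     # a TrafficSelector resource defined elsewhere
```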
Fixed
The following issues were fixed in 1.35.0-gke.525:
- Fixed vulnerabilities listed in Vulnerability fixes.
- Fixed an issue where node upgrades could hang indefinitely and bypass the 20-minute maintenance timeout. This issue occurred when a node contained completed pods within a namespace that was in a `Terminating` state. Because the Kubernetes Eviction API rejects operations in terminating namespaces, the cluster controller entered an infinite retry loop. The fix updates the drain process to skip eviction for pods in terminal phases, allowing the upgrade to proceed normally.
- Fixed an issue where concurrent tasks on the same node failed when containerd restarted. After the fix, tasks are locked and run sequentially to ensure each task completes successfully before the next begins. Each lock is held for up to 20 minutes or until the task reaches success or failure. To bypass this safety mechanism and run tasks concurrently, add the `baremetal.cluster.gke.io/concurrent-machine-update: "true"` annotation to your cluster (see the sketch after this list).
- Fixed an issue where Metrics API operations, including `kubectl top`, Horizontal Pod Autoscaling, and Vertical Pod Autoscaling, could fail with TLS verification errors during certificate authority rotation. This occurred because the leaf certificate was not immediately renewed when the certificate authority was rotated, causing a temporary mismatch between the trusted certificate authority bundle and the certificate presented by the metrics server.
- Fixed an issue where cluster CA rotation could hang indefinitely on self-managed clusters, with the `bmctl` command hanging at the "Trust CA Bundle completed in 0/X machines" stage. This occurred due to a state deadlock during the resource pivot operation (moving resources between the management and bootstrap clusters). This fix resolves the deadlock, eliminating the need to manually update cluster fields or remove lock ConfigMaps to recover.
- Fixed an issue where temporary API server connectivity failures (such as network timeouts) caused the system to unnecessarily re-register and redeploy the GKE Connect agent. This fix prevents these temporary errors from resetting manual or system-applied customizations to the agent deployment, improving cluster stability.
- Fixed an issue where bmctl could fail to capture the full log for long-running operations, resulting in empty or incomplete job logs in the workspace. This occurred because a strict internal timeout stopped log streaming prematurely. The fix ensures that log streaming continues for the full duration of the operation’s pod lifecycle.
- Fixed an issue in the monitoring component of the cluster operator where delete operations could cause the operator to crash if the resource had no annotations. The fix ensures the system properly handles resources with empty annotation maps, preventing the crash.
- Fixed an issue where the anet-operator could be scheduled to an unreachable node and become stuck in a Pending state, eventually causing networking to fail. This occurred due to overly permissive scheduling rules. The fix restricts scheduling to prevent the operator from running on unreachable nodes and explicitly places it on control plane nodes to ensure reliability.
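For the concurrent-task workaround mentioned in the list above, only the annotation key and value come from the release note; the rest of this sketch is illustrative.

```yaml
# Illustrative cluster resource excerpt with the opt-out annotation applied.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: my-cluster
  namespace: cluster-my-cluster
  annotations:
    # Allows machine update tasks to run concurrently, bypassing the
    # sequential locking behavior described above.
    baremetal.cluster.gke.io/concurrent-machine-update: "true"
```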
Google Kubernetes Engine
Feature
GKE Pod Snapshots is generally available on clusters that run version 1.35.3-gke.1234000 or later. For more information, see About GKE Pod snapshots.
Feature
In GKE Standard clusters, live migration is now supported on Confidential GKE Nodes that use C3D machine series with AMD SEV enabled.
Fixed
A fix is available for an issue that caused incomplete file reads and premature end-of-file (EOF) errors when you used the Cloud Storage FUSE CSI driver on ARM64 nodes that use 64 KiB page sizes, such as A4X and A4X Max instances. This issue occurred because the kernel read-ahead mechanism triggered read requests that exceeded the capacity of the Cloud Storage FUSE layer.
To resolve this issue, upgrade your cluster to one of the following versions:
- 1.33.11-gke.1019000 or later
- 1.34.6-gke.1154000 or later
- 1.35.2-gke.1485000 or later
Oracle Database@Google Cloud
Feature
For Exadata Database Service on Exascale infrastructure and Base Database Service, Oracle Database@Google Cloud supports the following regions and zones:
- `asia-south1-b-r1` (Mumbai, India)
- `asia-south2-b-r1` (Delhi, India)
For a list of supported locations, see Supported regions and zones.
Secure Source Manager
Feature
You can now use CODEOWNERS files to define required reviewers for pull requests.
Virtual Private Cloud
Feature
Organization Policy Service custom constraints are available in General Availability for private services access connections. For more information, see Restrict private connections with organization policies.
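The note above doesn't include a constraint example; the following is a minimal sketch of an Organization Policy custom constraint definition, where the resource type and condition expression are assumptions for illustration only.

```yaml
# Hypothetical custom constraint restricting private services access connections.
# The resourceTypes value and the condition expression are placeholders.
name: organizations/123456789012/customConstraints/custom.restrictPsaConnections
resourceTypes:
- servicenetworking.googleapis.com/Connection           # assumed resource type
methodTypes:
- CREATE
condition: "resource.network.endsWith('/approved-vpc')" # illustrative CEL expression
actionType: DENY
displayName: Restrict private services access connections
description: Deny new private connections outside the approved VPC network.
```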
Source: Google Cloud Platform



