Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Amazon EC2 P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps of EFA networking, 300 Gbps of dedicated ENA throughput, and 4 TB of system memory.
P6-B300 instances deliver 2x the networking bandwidth, 1.5x the GPU memory, and 1.5x the GPU TFLOPS (at FP4, without sparsity) of P6-B200 instances, making them well suited for training and deploying trillion-parameter foundation models (FMs) and large language models (LLMs). The increased networking bandwidth and larger GPU memory deliver faster training times and higher token throughput for AI workloads.
P6-B300 instances are now available in the p6-b300.48xlarge size through Amazon EC2 Capacity Blocks for ML and Savings Plans in the following AWS Region: US West (Oregon). For on-demand reservation of P6-B300 instances, please reach out to your account manager.
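Capacity Block reservations are made through the EC2 API. As a minimal sketch, assuming the AWS SDK for Python (boto3) with configured credentials, searching for offerings of this instance type might look like the following; the instance count and duration defaults are illustrative assumptions, not values from this announcement:

```python
def p6_b300_offering_params(instance_count=1, duration_hours=24):
    """Build parameters for the EC2 DescribeCapacityBlockOfferings call.

    The count and duration here are illustrative defaults, not
    recommendations from the announcement.
    """
    return {
        "InstanceType": "p6-b300.48xlarge",
        "InstanceCount": instance_count,
        "CapacityDurationHours": duration_hours,
    }


def find_offerings(region="us-west-2"):
    """List Capacity Block offerings in US West (Oregon).

    Requires boto3 and configured AWS credentials.
    """
    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_capacity_block_offerings(**p6_b300_offering_params())
    return resp["CapacityBlockOfferings"]
```

Each returned offering carries an ID that can then be passed to the EC2 PurchaseCapacityBlock operation to reserve the capacity.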
To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Categories: general:products/amazon-ec2,marketing:marchitecture/compute,marketing:marchitecture/artificial-intelligence
Source: Amazon Web Services



