Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Each P6-B300 instance provides 8 NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps of Elastic Fabric Adapter (EFA) networking, 300 Gbps of dedicated Elastic Network Adapter (ENA) throughput, and 4 TB of system memory.
Compared to P6-B200 instances, P6-B300 instances deliver 2x the networking bandwidth, 1.5x the GPU memory, and 1.5x the GPU TFLOPS (at FP4, without sparsity), making them well suited for training and deploying trillion-parameter foundation models (FMs) and large language models (LLMs) using advanced distributed training and inference techniques. The higher networking bandwidth and larger GPU memory deliver faster training times and higher token throughput for AI workloads.
P6-B300 instances are now available in the p6-b300.48xlarge size through Amazon EC2 Capacity Blocks for ML and Savings Plans in the US West (Oregon) AWS Region. For On-Demand reservations of P6-B300 instances, contact your account manager.
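As a minimal sketch of the Capacity Blocks for ML purchase path mentioned above, the boto3 snippet below queries available offerings for p6-b300.48xlarge and purchases the first match. The region, instance count, duration, and the assumption that the first returned offering is acceptable are all illustrative choices, not part of the announcement.

```python
def capacity_block_query(instance_count: int, duration_hours: int) -> dict:
    """Build the request parameters for DescribeCapacityBlockOfferings.

    Duration must be a multiple of 24 hours; instance type is the
    p6-b300.48xlarge size named in the announcement.
    """
    return {
        "InstanceType": "p6-b300.48xlarge",
        "InstanceCount": instance_count,
        "CapacityDurationHours": duration_hours,
    }


def reserve_capacity_block(instance_count: int = 1, duration_hours: int = 24):
    """Find and purchase a Capacity Block in US West (Oregon).

    Requires AWS credentials with EC2 purchase permissions; returns the
    purchase response, or None if no offering matched.
    """
    import boto3  # imported here so the module loads without boto3 installed

    ec2 = boto3.client("ec2", region_name="us-west-2")  # US West (Oregon)
    offerings = ec2.describe_capacity_block_offerings(
        **capacity_block_query(instance_count, duration_hours)
    )["CapacityBlockOfferings"]
    if not offerings:
        return None
    # Purchase the first offering returned (a simplifying assumption;
    # real workflows would compare price and start date across offerings).
    return ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
```

The purchased Capacity Block then appears as a capacity reservation, and instances are launched into it for the reserved window.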
To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Source: Amazon Web Services