Starting today, Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances are available in the US East (N. Virginia) Region. P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps of EFA networking, 300 Gbps of dedicated ENA throughput, and 4 TB of system memory.
P6-B300 instances deliver 2x the networking bandwidth, 1.5x the GPU memory, and 1.5x the GPU TFLOPS (at FP4, without sparsity) of P6-B200 instances, making them well suited to training and deploying trillion-parameter foundation models (FMs) and large language models (LLMs). The higher networking bandwidth and larger memory deliver faster training times and higher token throughput for AI workloads.
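As a quick sanity check on the stated multipliers, the sketch below compares the P6-B300 figures from this announcement against P6-B200 specs; the P6-B200 numbers (3.2 Tbps EFA networking, 1.4 TB GPU memory) are assumed from that family's published specifications, not from this post.

```python
# P6-B300 figures from this announcement; P6-B200 figures are assumed
# from that instance family's published specs (an assumption, not from
# this post).
p6_b300 = {"efa_tbps": 6.4, "gpu_mem_tb": 2.1}
p6_b200 = {"efa_tbps": 3.2, "gpu_mem_tb": 1.4}

net_ratio = p6_b300["efa_tbps"] / p6_b200["efa_tbps"]      # ~2x networking
mem_ratio = p6_b300["gpu_mem_tb"] / p6_b200["gpu_mem_tb"]  # ~1.5x GPU memory
print(f"networking: {net_ratio:.1f}x, GPU memory: {mem_ratio:.1f}x")
```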
P6-B300 instances are now available in the p6-b300.48xlarge size in the following AWS Regions: US West (Oregon), AWS GovCloud (US-East), and US East (N. Virginia). To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Source: Amazon Web Services