AWS Clean Rooms now supports configurable compute size for PySpark, giving customers the flexibility to allocate resources for PySpark jobs based on their performance, scale, and cost requirements. With this launch, customers can specify the instance type and cluster size at job runtime for each analysis that uses PySpark, the Python API for Apache Spark. For example, customers can use larger instance configurations to achieve the performance needed for complex data sets and analyses, or smaller instances to optimize costs.
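As a rough illustration of what specifying compute at job runtime could look like, here is a minimal sketch using the boto3 `cleanrooms` client. The `start_protected_job` call and the `computeConfiguration` shape shown here are assumptions modeled on the analogous Spark SQL request pattern; the actual parameter names, worker-type identifiers, and resource identifiers are placeholders and may differ from the real API.

```python
import boto3

# Sketch: submitting a PySpark analysis with a per-job compute configuration.
# The parameter names and worker-type values below are assumptions for
# illustration, not a confirmed request syntax.

cleanrooms = boto3.client("cleanrooms")

response = cleanrooms.start_protected_job(
    membershipIdentifier="membership-id-placeholder",  # collaboration membership ID (placeholder)
    type="PYSPARK",
    jobParameters={
        # PySpark analysis template to run (placeholder ARN)
        "analysisTemplateArn": "arn:aws:cleanrooms:us-east-1:111122223333:analysistemplate/placeholder"
    },
    # Assumed shape: pick a larger worker type / count for heavy analyses,
    # or a smaller configuration to optimize cost.
    computeConfiguration={
        "worker": {
            "type": "CR.4X",   # assumed instance-size identifier
            "number": 16,      # assumed worker count
        }
    },
)

print(response)
```

The intent of the sketch is simply to show the documented idea, choosing instance type and cluster size per analysis rather than per collaboration, so the same analysis template can be run with different resource allocations depending on workload.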
AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
Categories: general:products/aws-clean-rooms,marketing:marchitecture/analytics
Source: Amazon Web Services