AWS Clean Rooms supports configurable compute size for PySpark jobs

AWS Clean Rooms now supports configurable compute sizes for PySpark, giving customers the flexibility to allocate resources to PySpark jobs based on their performance, scale, and cost requirements. With this launch, customers can specify the instance type and cluster size at job runtime for each analysis that uses PySpark, the Python API for Apache Spark. For example, customers can choose large instance configurations to meet the performance needs of complex datasets and analyses, or smaller instances to optimize costs.
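As a rough illustration, starting a PySpark job with an explicit compute configuration from the AWS SDK for Python (boto3) might look like the sketch below. This is an assumption-laden example: the start_protected_job operation and the worker type/count fields mirror the computeConfiguration shape used elsewhere in the Clean Rooms API, and the specific parameter names, values, and ARN are illustrative placeholders, so check the current AWS Clean Rooms API reference before relying on them.

```python
# Illustrative sketch only. Assumes the boto3 "cleanrooms" client exposes a
# start_protected_job operation for PySpark analyses that accepts a
# computeConfiguration with a worker instance type and count, similar to the
# one used by start_protected_query. Verify names against the API reference.
import boto3

cleanrooms = boto3.client("cleanrooms")

response = cleanrooms.start_protected_job(
    membershipIdentifier="membership-id",  # your collaboration membership ID
    type="PYSPARK",                        # run a PySpark analysis job
    jobParameters={
        # Hypothetical analysis template ARN for the PySpark analysis
        "analysisTemplateArn": "arn:aws:cleanrooms:region:account:membership/.../analysistemplate/example",
    },
    # Assumed shape: choose a larger worker type and more workers for heavy
    # analyses, or a smaller configuration to optimize costs.
    computeConfiguration={
        "worker": {
            "type": "CR.4X",   # illustrative instance size
            "number": 16,      # illustrative cluster size
        }
    },
)
print(response)  # response shape not confirmed; inspect for the job identifier
```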

AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

Categories: general:products/aws-clean-rooms,marketing:marchitecture/analytics

Source: Amazon Web Services
