Amazon Bedrock Guardrails adds support for coding use cases

AWS announced expanded capabilities in Amazon Bedrock Guardrails for code-related use cases, enabling customers to protect against harmful content in code while building generative AI applications. Customers can now apply the existing safeguards offered by Bedrock Guardrails, including content filters, denied topics, and sensitive information filters, to detect intent to inject malicious code, detect and prevent prompt leakage, and help protect against introducing personally identifiable information (PII) within code.
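
As an illustrative sketch only, the following example shows how such a guardrail might be created with the AWS SDK for Python (boto3), combining content filters, a denied topic, and sensitive information filters of the kind described above. The guardrail name, topic definition, filter strengths, and PII choices are assumptions made for the example, not values from the announcement, and the standard-tier selection mentioned below is omitted here.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail whose existing policy types (content filters, denied
# topics, sensitive information filters) also apply to code content.
# Note: the standard tier mentioned in the announcement is configured
# separately; tier configuration is omitted from this sketch.
response = bedrock.create_guardrail(
    name="code-safety-guardrail",  # hypothetical name
    description="Blocks malicious code intent, prompt attacks, and PII in code",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "MISCONDUCT", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            # Prompt attack detection applies to inputs only, so outputStrength is NONE.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "MaliciousCode",  # hypothetical denied topic
                "definition": "Requests to write or embed code intended to harm systems or exfiltrate data.",
                "type": "DENY",
            }
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
        ]
    },
    blockedInputMessaging="This request was blocked by the guardrail.",
    blockedOutputsMessaging="The response was blocked by the guardrail.",
)
print(response["guardrailId"], response["version"])
```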

With expanded support for code-related use cases, Amazon Bedrock Guardrails now provides customers with safeguards against harmful content introduced within code elements, including comments, variable and function names, and string literals. Content filters (with the standard tier) detect and filter such harmful content in code in the same way they protect text and image content. Bedrock Guardrails also offers prompt leakage detection (with the standard tier), helping detect and prevent unintended disclosure of system prompt information in model responses that could compromise intellectual property. In addition, denied topics (with the standard tier) and sensitive information filters now help safeguard against denied topics expressed through code and help prevent inclusion of PII within code structures.
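
A minimal sketch of checking a code snippet directly with the ApplyGuardrail API via boto3 is shown below; the guardrail identifier is a placeholder, and the snippet embedding an email address in a string literal is a hypothetical example of PII inside code.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# A hypothetical code snippet containing PII (an email address) in a
# comment and a string literal.
code_snippet = '''
def notify_user():
    # contact: jane.doe@example.com
    send_mail("jane.doe@example.com", "Your report is ready")
'''

# Evaluate the snippet against the guardrail as if it were model output.
result = runtime.apply_guardrail(
    guardrailIdentifier="code-safety-guardrail-id",  # placeholder ID
    guardrailVersion="1",
    source="OUTPUT",
    content=[{"text": {"text": code_snippet}}],
)

print(result["action"])  # e.g. "GUARDRAIL_INTERVENED" or "NONE"
for assessment in result["assessments"]:
    print(assessment)    # per-policy findings, e.g. detected PII entities
```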

The expanded capabilities for code-related use cases are available in all AWS Regions where Amazon Bedrock Guardrails is supported. Customers can access the service through the Amazon Bedrock console as well as the supported APIs.
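
For API access, a guardrail can also be attached to model invocations, for example through the Converse API in boto3 as sketched below; the model ID and guardrail identifier are placeholders for this illustration.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke a model with the guardrail attached so that both the coding
# prompt and the generated code are evaluated by the configured policies.
response = runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model
    messages=[{
        "role": "user",
        "content": [{"text": "Write a Python function that parses a CSV file."}],
    }],
    guardrailConfig={
        "guardrailIdentifier": "code-safety-guardrail-id",  # placeholder ID
        "guardrailVersion": "1",
        "trace": "enabled",
    },
)
print(response["output"]["message"]["content"][0]["text"])
print(response.get("stopReason"))  # "guardrail_intervened" if blocked
```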

To learn more, read the launch blog, technical documentation, and the Bedrock Guardrails product page.

Categories: marketing:marchitecture/artificial-intelligence,general:products/amazon-bedrock

Source: Amazon Web Services


