Amazon OpenSearch Ingestion now supports batch AI inference

You can now perform batch AI inference within Amazon OpenSearch Ingestion pipelines to efficiently enrich and ingest large datasets for Amazon OpenSearch Service domains.

Previously, customers used OpenSearch's AI connectors to Amazon Bedrock, Amazon SageMaker, and third-party services for real-time inference. These inferences generate enrichments such as vector embeddings, predictions, translations, and recommendations that power AI use cases. Real-time inference is ideal for low-latency requirements such as streaming enrichments, while batch inference is better suited to enriching large datasets offline, where it delivers higher performance and cost efficiency. You can now use the same AI connectors with Amazon OpenSearch Ingestion pipelines to run asynchronous batch inference jobs that enrich large datasets, such as generating and ingesting billions of vector embeddings.
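
To illustrate the flow, here is a minimal sketch that creates such a pipeline with the AWS SDK for Python (boto3). The create_pipeline call and its parameters are part of the real boto3 OpenSearch Ingestion (osis) API, but the pipeline body is an assumption: the ml processor name, its batch_predict option, and the model ID, bucket, domain endpoint, and role ARN are hypothetical placeholders rather than confirmed syntax from this announcement. See the OpenSearch Ingestion documentation for the exact batch-inference pipeline schema.

    import boto3

    osis = boto3.client("osis", region_name="us-east-1")

    # Hypothetical pipeline definition: reads documents from S3, runs an
    # asynchronous batch inference job through an AI connector to generate
    # vector embeddings, and writes the enriched documents to an
    # OpenSearch Service domain. Processor name and options are assumptions.
    pipeline_body = """
    version: "2"
    batch-embedding-pipeline:
      source:
        s3:
          codec:
            ndjson:
          compression: none
          aws:
            region: us-east-1
          scan:
            buckets:
              - bucket:
                  name: my-source-bucket        # assumption: your input bucket
      processor:
        - ml:                                   # assumption: ML inference processor
            action_type: batch_predict          # asynchronous batch inference
            model_id: my-connector-model-id     # assumption: AI connector model ID
      sink:
        - opensearch:
            hosts: ["https://my-domain.us-east-1.es.amazonaws.com"]
            index: embeddings-index
            aws:
              region: us-east-1
              sts_role_arn: arn:aws:iam::123456789012:role/pipeline-role
    """

    response = osis.create_pipeline(
        PipelineName="batch-embedding-pipeline",
        MinUnits=1,
        MaxUnits=4,
        PipelineConfigurationBody=pipeline_body,
    )
    print(response["Pipeline"]["Status"])

Once the pipeline is active, the batch job runs offline against the full dataset, which is what makes this path more cost-efficient than issuing per-document real-time inference calls.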

This feature is available in all AWS Regions that support Amazon OpenSearch Ingestion, and works with OpenSearch Service domains running version 2.17 or later. To learn more, see the Amazon OpenSearch Ingestion documentation.

Source: Amazon Web Services


