A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon S3 bucket.
The data is encrypted with Amazon S3 managed keys (SSE-S3).
The FM encounters a failure when attempting to access the S3 bucket data. Which solution will meet these requirements?
Correct Answer:A
Amazon Bedrock needs the appropriate IAM role with permission to access and decrypt data stored in Amazon S3. If the data is encrypted with Amazon S3 managed keys (SSE-S3), the role that Amazon Bedrock assumes must have the required permissions to access and decrypt the encrypted data.
✑ Option A (Correct): "Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key": This is the correct solution as it ensures that the AI model can access the encrypted data securely without changing the encryption settings or compromising data security.
✑ Option B: "Set the access permissions for the S3 buckets to allow public access" is incorrect because it violates security best practices by exposing sensitive data to the public.
✑ Option C: "Use prompt engineering techniques to tell the model to look for information in Amazon S3" is incorrect as it does not address the encryption and permission issue.
✑ Option D: "Ensure that the S3 data does not contain sensitive information" is incorrect because it does not solve the access problem related to encryption.
AWS AI Practitioner References:
✑ Managing Access to Encrypted Data in AWS: AWS recommends using proper IAM roles and policies to control access to encrypted data stored in S3.
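As a concrete sketch of the fix, the execution role that Bedrock assumes needs read access to the bucket; because SSE-S3 decryption is handled transparently by S3 for any authorized reader, no separate KMS statement is required. The policy below is a minimal illustration, and the bucket name is a hypothetical placeholder:

```python
import json

# Minimal sketch of an IAM policy for the role Amazon Bedrock assumes.
# The bucket name "example-chatbot-data" is a placeholder, not from the
# question. With SSE-S3, S3 decrypts objects transparently for any
# authorized reader, so s3:GetObject is sufficient; no kms:Decrypt
# statement is needed (that would only apply to SSE-KMS).
bedrock_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadEncryptedChatbotData",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-chatbot-data",
                "arn:aws:s3:::example-chatbot-data/*",
            ],
        }
    ],
}

print(json.dumps(bedrock_s3_policy, indent=2))
```

The policy would be attached to the service role that the Bedrock feature (for example, a knowledge base) is configured to assume.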
A company uses Amazon SageMaker for its ML pipeline in a production environment. The company has large input data sizes up to 1 GB and processing times up to 1 hour. The company needs near real-time latency.
Which SageMaker inference option meets these requirements?
Correct Answer:A
Real-time inference is designed to provide immediate, low-latency predictions, which is necessary when the company requires near real-time latency for its ML models. This option is optimal when there is a need for fast responses, even with large input data sizes and substantial processing times.
✑ Option A (Correct): "Real-time inference": This is the correct answer because it supports low-latency requirements, which are essential for real-time applications where quick response times are needed.
✑ Option B: "Serverless inference" is incorrect because it is more suited for intermittent, small-scale inference workloads, not for continuous, large-scale, low-latency needs.
✑ Option C: "Asynchronous inference" is incorrect because it is used for workloads that do not require immediate responses.
✑ Option D: "Batch transform" is incorrect as it is intended for offline, large-batch processing where immediate response is not necessary.
AWS AI Practitioner References:
✑ Amazon SageMaker Inference Options: AWS documentation describes real-time inference as the best solution for applications that require immediate prediction results with low latency.
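For context, invoking a real-time endpoint is a synchronous request/response call against a persistently hosted model. The sketch below assembles the arguments such a call would take; the endpoint name and payload schema are illustrative assumptions, and the actual boto3 call is shown only in a comment so the snippet stays self-contained:

```python
import json

def build_invoke_args(endpoint_name: str, features: dict) -> dict:
    """Assemble the keyword arguments for the SageMaker runtime's
    invoke_endpoint call. Endpoint name and feature schema here are
    illustrative placeholders, not values from the question."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps(features),
    }

args = build_invoke_args("my-realtime-endpoint", {"feature_1": 0.42})

# With boto3 installed and an endpoint actually deployed, the synchronous,
# low-latency call would look like:
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(**args)
#   prediction = json.loads(response["Body"].read())
```

The caller blocks until the prediction comes back, which is what gives real-time inference its low-latency, interactive character.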
A student at a university is copying content from generative AI to write essays. Which challenge of responsible generative AI does this scenario represent?
Correct Answer:C
The scenario where a student copies content from generative AI to write essays represents the challenge of plagiarism in responsible AI use.
✑ Plagiarism: Presenting work that is not one's own as original. Submitting AI-generated text in an essay without attribution falls squarely under this challenge.
✑ Why Option C is Correct: The student passes off generated content as original academic work, which is the definition of plagiarism, a widely discussed responsible-AI concern in education.
✑ Why Other Options are Incorrect:
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately.
Which Amazon SageMaker inference option will meet these requirements?
Correct Answer:A
Batch transform in Amazon SageMaker is designed for offline processing of large datasets. It is ideal for scenarios where immediate predictions are not required, and the inference can be done on large datasets that are multiple gigabytes in size. This method processes data in batches, making it suitable for analyzing archived data without the need for real-time access to predictions.
✑ Option A (Correct): "Batch transform": This is the correct answer because batch transform is optimized for handling large datasets and is suitable when immediate access to predictions is not required.
✑ Option B: "Real-time inference" is incorrect because it is used for low-latency, real-time prediction needs, which is not required in this case.
✑ Option C: "Serverless inference" is incorrect because it is designed for small-scale, intermittent inference requests, not for large batch processing.
✑ Option D: "Asynchronous inference" is incorrect because it queues individual requests and is suited to large payloads that still need near real-time results, whereas batch transform is the better fit for fully offline inference over very large archived datasets.
AWS AI Practitioner References:
✑ Batch Transform on AWS SageMaker: AWS recommends using batch transform for large datasets when real-time processing is not needed, ensuring cost-effectiveness and scalability.
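To illustrate, a batch transform job points an existing model at an S3 prefix of input files and writes predictions back to S3, with no endpoint kept running. The configuration below is a hedged sketch of the CreateTransformJob request body; all names, URIs, and instance choices are hypothetical placeholders:

```python
# Sketch of a SageMaker batch transform job configuration, expressed as the
# request body for the CreateTransformJob API. All names and S3 URIs are
# hypothetical placeholders, not values from the question.
transform_job = {
    "TransformJobName": "archived-data-inference",
    "ModelName": "my-trained-model",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/archived-input/",
            }
        },
        "ContentType": "text/csv",
        # SplitType lets SageMaker break multi-GB files into individual
        # records so they can be processed in batches.
        "SplitType": "Line",
    },
    "TransformOutput": {"S3OutputPath": "s3://example-bucket/predictions/"},
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
}

# With boto3 and appropriate permissions, the job would be started with:
#   boto3.client("sagemaker").create_transform_job(**transform_job)
```

Once the job finishes, the predictions sit in the output prefix and the compute is released, which is what makes this option cost-effective for archived data.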
A company is building a customer service chatbot. The company wants the chatbot to improve its responses by learning from past interactions and online resources.
Which AI learning strategy provides this self-improvement capability?
Correct Answer:B
Reinforcement learning allows a model to learn and improve over time based on feedback from its environment. In this case, the chatbot can improve its responses by being rewarded for positive customer feedback, which aligns well with the goal of self-improvement based on past interactions and new information.
✑ Option B (Correct): "Reinforcement learning with rewards for positive customer feedback": This is the correct answer as reinforcement learning enables the chatbot to learn from feedback and adapt its behavior accordingly, providing self-improvement capabilities.
✑ Option A: "Supervised learning with a manually curated dataset" is incorrect because it does not support continuous learning from new interactions.
✑ Option C: "Unsupervised learning to find clusters of similar customer inquiries" is incorrect because unsupervised learning does not provide a mechanism for improving responses based on feedback.
✑ Option D: "Supervised learning with a continuously updated FAQ database" is incorrect because it still relies on manually curated data rather than self-improvement from feedback.
AWS AI Practitioner References:
✑ Reinforcement Learning on AWS: AWS provides reinforcement learning frameworks that can be used to train models to improve their performance based on feedback.
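As a toy illustration of the idea (not a production chatbot), the reward-driven loop can be sketched as an epsilon-greedy bandit, a deliberately simplified form of reinforcement learning; the candidate responses and feedback rates below are invented:

```python
import random

# Toy sketch of the reinforcement idea: the chatbot keeps a running value
# estimate for each candidate response and drifts toward responses that
# earn positive customer feedback. This epsilon-greedy bandit is a
# simplification of full RL; responses and reward rates are made up.
random.seed(0)

responses = ["apologize_and_escalate", "link_to_faq", "ask_clarifying_question"]
values = {r: 0.0 for r in responses}   # estimated reward per response
counts = {r: 0 for r in responses}
EPSILON = 0.1                          # exploration rate

def choose_response() -> str:
    if random.random() < EPSILON:
        return random.choice(responses)             # explore
    return max(responses, key=lambda r: values[r])  # exploit best so far

def record_feedback(response: str, reward: float) -> None:
    """Update the running-average value estimate (reward: 1 good, 0 bad)."""
    counts[response] += 1
    values[response] += (reward - values[response]) / counts[response]

# Simulated interactions: customers reward clarifying questions most often.
true_rates = {"apologize_and_escalate": 0.3, "link_to_faq": 0.5,
              "ask_clarifying_question": 0.8}
for _ in range(2000):
    r = choose_response()
    record_feedback(r, 1.0 if random.random() < true_rates[r] else 0.0)

best = max(values, key=values.get)
```

After enough interactions, the highest-value response is the one customers rewarded most, which is the self-improvement behavior the question describes.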
A company has terabytes of data in a database that the company can use for business analysis. The company wants to build an AI-based application that can build a SQL query from input text that employees provide. The employees have minimal experience with technology.
Which solution meets these requirements?
Correct Answer:A
Generative Pre-trained Transformers (GPT) are suitable for building an AI-based application that can generate SQL queries from natural language input provided by employees.
✑ GPT for Natural Language Processing: GPT models excel at translating natural language into structured output, so employees can describe the data they need in plain English and receive a runnable SQL query in return.
✑ Why Option A is Correct: A GPT-based text-to-SQL application lets employees with minimal technology experience query terabytes of data without learning SQL themselves.
✑ Why Other Options are Incorrect:
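As a sketch of how such an application might work, the key step is wrapping the employee's plain-English question with the relevant table schema before sending it to a GPT-style model; the schema, prompt wording, and the Bedrock call shown in comments are all illustrative assumptions:

```python
# Sketch of assembling a text-to-SQL prompt for a GPT-style model.
# The table schema and prompt wording are hypothetical examples.
SCHEMA = """CREATE TABLE orders (
    order_id INT,
    customer_name VARCHAR(100),
    total DECIMAL(10, 2),
    order_date DATE
);"""

def build_text_to_sql_prompt(question: str) -> str:
    """Wrap an employee's plain-English question with the schema so the
    model has the context it needs to emit a valid SQL query."""
    return (
        "Given the following database schema:\n"
        f"{SCHEMA}\n\n"
        "Write a single SQL query that answers this question:\n"
        f"{question}\n"
        "Return only the SQL."
    )

prompt = build_text_to_sql_prompt("What were total sales last month?")

# With boto3, the prompt would then be sent to a text model hosted on
# Amazon Bedrock or a similar service, e.g.:
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(modelId="<model-id>", body=json.dumps({...}))
```

The application would then run the returned SQL against the database on the employee's behalf, so no SQL knowledge is required of the user.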