Prepare for the Amazon AWS Certified Developer - Associate exam with our extensive collection of questions and answers. These practice Q&A are updated according to the latest syllabus, providing you with the tools needed to review and test your knowledge.
QA4Exam focuses on the latest syllabus and exam objectives; our practice Q&A are designed to help you identify key topics and solidify your understanding. By focusing on the core curriculum, these questions and answers help you cover all the essential topics, ensuring you're well prepared for every section of the exam. Each question comes with a detailed explanation, offering valuable insights and helping you learn from your mistakes. Whether you're looking to assess your progress or dive deeper into complex topics, our updated Q&A will provide the support you need to confidently approach the Amazon DVA-C02 exam and achieve success.
A developer is testing an AWS Lambda function that processes messages from an Amazon SQS queue. Some messages reappear in the queue while they are still being processed.
What should the developer do to correct this behavior?
Comprehensive and Detailed Explanation:
Amazon SQS uses a visibility timeout to prevent other consumers from processing a message while it is being handled. If a Lambda function does not complete processing before the visibility timeout expires, the message becomes visible again and can be processed a second time.
AWS documentation recommends setting the SQS visibility timeout to at least six times the Lambda function's configured timeout. If the visibility timeout is shorter than the time needed to process a message, the message can be delivered multiple times even though processing is still in progress.
Increasing the Lambda timeout or memory does not directly affect message visibility. Increasing batch size can worsen the problem by increasing processing time.
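This sizing rule can be sketched in a few lines; the six-times multiplier follows the AWS guidance above, and the helper name is made up for illustration:

```python
def visibility_timeout_for(function_timeout_s: int) -> int:
    """Return an SQS visibility timeout sized for a Lambda consumer.

    Follows the AWS guidance that the queue's visibility timeout should be
    at least six times the function's configured timeout, so an in-flight
    message stays invisible while the function is still working on it.
    """
    return 6 * function_timeout_s

# A function with a 30-second timeout needs a visibility timeout of >= 180 s.
print(visibility_timeout_for(30))  # 180
```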
A developer is building an application on AWS. The application has an Amazon API Gateway API that sends requests to an AWS Lambda function. The API is experiencing increased latency because the Lambda function has limited available CPU to fulfill the requests.
Before the developer deploys the API into production, the developer must configure the Lambda function to have more CPU.
Which solution will meet this requirement?
In AWS Lambda, CPU power scales proportionally with the memory configuration. Lambda does not let you directly set "CPU cores" as a standalone setting in the general case; instead, increasing the function's configured memory increases the CPU allocation (and other resources such as network throughput) available to the function. Therefore, to reduce latency caused by insufficient CPU, the developer should increase the function's memory setting.
Option B directly addresses the CPU limitation in the supported Lambda configuration model. This is a common performance tuning approach: raise memory, benchmark, and find the optimal cost/performance point.
Option A is not the right lever for most Lambda functions because Lambda compute is configured via memory (and architecture), not by directly raising a "vCPU quota" per function in a typical tuning workflow.
Option C increases /tmp storage capacity and helps when dealing with large temporary files, not CPU availability.
Option D increases the maximum runtime allowed per invocation, but it does not give the function more CPU and will not reduce latency caused by CPU starvation.
Therefore, increasing the allocated memory is the correct way to increase CPU for a Lambda function.
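The memory-to-CPU relationship can be sketched as follows. The 1,769 MB figure is AWS's documented point at which a function has the equivalent of one full vCPU; the helper names are hypothetical, and the boto3 call requires AWS credentials to actually run:

```python
def approximate_vcpus(memory_mb: int) -> float:
    # Lambda allocates CPU in proportion to configured memory; at 1,769 MB
    # a function has the equivalent of one full vCPU (AWS documented figure).
    return memory_mb / 1769

def increase_lambda_memory(function_name: str, memory_mb: int) -> None:
    # Hypothetical helper: raise the memory (and therefore CPU) of a function.
    # boto3 is imported lazily so the sketch runs without AWS access.
    import boto3
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        MemorySize=memory_mb,
    )

# Doubling memory from 1,769 MB to 3,538 MB roughly doubles available CPU.
print(round(approximate_vcpus(3538), 2))  # 2.0
```

The usual tuning loop is to raise the memory setting, benchmark the function's latency and cost, and repeat until the best price/performance point is found.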
A company offers a business-to-business software service that runs on dedicated infrastructure deployed in each customer's AWS account. Before a feature release, the company needs to run integration tests on real AWS test infrastructure. The test infrastructure consists of Amazon EC2 instances and an Amazon RDS database.
A developer must set up a continuous delivery process that will provision the test infrastructure across the different AWS accounts. The developer then must run the integration tests.
Which solution will meet these requirements with the LEAST administrative effort?
A company wants to use AWS AppConfig to gradually deploy a new feature to 15% of users to test the feature before a full deployment.
Which solution will meet this requirement with the LEAST operational overhead?
AWS AppConfig Feature Flags are designed to release features safely with minimal code changes. The lowest operational overhead approach is to use a single feature flag and configure it to deliver the feature to a percentage of users through built-in targeting rules and variants.
With option C, the developer creates one AppConfig feature flag and defines a variant that represents the new feature behavior (for example, enabled=true or a "newExperience" variant). Then the developer creates a rule in AppConfig to target 15% of users. AppConfig feature flags support rules that can evaluate attributes (such as user IDs or other contextual attributes supplied by the application) and apply percentage-based rollout. This avoids building and maintaining custom rollout logic in the application.
Options A and D both require implementing and maintaining custom traffic-splitting logic in code, which increases complexity and operational burden and risks inconsistent behavior across clients and services.
Option B adds extra configuration overhead and complexity by managing multiple feature flags for "groups" instead of using a single flag with a variant and rule. A single flag with variants is the clean, scalable pattern.
Therefore, create one AppConfig feature flag, define a variant, and configure a rule to target 15% of users.
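For intuition, deterministic percentage targeting of the kind an AppConfig rule performs can be sketched as a hash bucket. This is not AppConfig's internal algorithm; it is the sort of custom logic (function and salt names are made up here) that options A and D would force every application to maintain, and that a managed flag rule replaces:

```python
import hashlib

def in_rollout(user_id: str, percent: int, salt: str = "new-feature") -> bool:
    # Hash the user into a stable bucket in [0, 100); users whose bucket
    # falls below `percent` see the new feature on every request.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Roughly 15% of a large user population lands in the rollout.
share = sum(in_rollout(f"user-{i}", 15) for i in range(10_000)) / 10_000
print(f"{share:.0%}")
```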
A developer is setting up AWS CodePipeline for a new application. During each build, the developer must generate a test report.
Which solution will meet this requirement?
In CodePipeline, the service designed to run builds, execute unit/integration tests, and produce build artifacts (including test reports) is AWS CodeBuild. CodeBuild runs commands specified in a buildspec.yml file and supports reporting outputs such as test results (for example, JUnit XML), code coverage, and other build metadata.
Option A is correct because the developer can configure the CodeBuild project to run the test suite during the build phase and configure the buildspec to publish test report files. This integrates naturally into CodePipeline as a build stage, and the reports are generated consistently on each pipeline execution.
Option B is incorrect: CodeDeploy is for deploying applications (in-place/blue-green) and lifecycle hooks, not for standard build/test report generation.
Option C increases operational overhead by managing an EC2 build server, which is unnecessary when CodeBuild provides managed build environments.
Option D is unrelated: CodeArtifact is a package repository, not a test reporting solution.
Therefore, use CodeBuild and configure test reports via the buildspec.
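A minimal buildspec fragment showing the reports section; the report group name, test command, and file paths are placeholders for whatever the project actually uses:

```yaml
version: 0.2
phases:
  build:
    commands:
      # Run the test suite; the command depends on the project's toolchain.
      - mvn test
reports:
  unit-test-reports:   # placeholder report group name
    files:
      - 'target/surefire-reports/*.xml'
    file-format: JUNITXML
```

On each pipeline execution, CodeBuild collects the matched files and publishes them as a test report visible in the CodeBuild console.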