Many students have passed the AWS Certified DevOps Engineer - Professional exam with the help of DOP-C01 Real Exam Dumps. If you are interested in this certification, you can pass your exam with confidence: we guarantee a pass on the first attempt, and if you fail, your money is returned, because we do not want to be paid for something that provides no benefit. The questions and answers provided at DumpsforSure are fully aligned with the exam requirements, and you can use the demo questions to check their quality. We also provide an online testing engine for those who have worked through the DOP-C01 dumps and want to practice their knowledge; practice makes perfect, and the testing engine is built around that principle. If you want to pursue this certification, you can contact us at DumpsforSure. https://www.dumpsforsure.com/amazon/dop-c01-dumps.html
2020 Valid Amazon DOP-C01 Exam Questions
Amazon Web Services
DOP-C01
[Total Questions: 10]
https://www.dumpsforsure.com/amazon/dop-c01-dumps.html
Practice Test Amazon Web Services - DOP-C01
Question #:1
A company is building a web and mobile application that uses a serverless architecture powered by AWS
Lambda and Amazon API Gateway. The company wants to fully automate the backend Lambda deployment
based on code that is pushed to the appropriate environment branch in an AWS CodeCommit repository.
The deployment must have the following:
Separate environment pipelines for testing and production.
Automatic deployment that occurs for test environments only.
Which steps should be taken to meet these requirements?
A. Configure a new AWS CodePipeline service. Create a CodeCommit repository for each environment.
Set up CodePipeline to retrieve the source code from the appropriate repository. Set up a deployment
step to deploy the Lambda functions with AWS CloudFormation.
B. Create two AWS CodePipeline configurations for test and production environments. Configure the
production pipeline to have a manual approval step. Create a CodeCommit repository for each
environment. Set up each CodePipeline to retrieve the source code from the appropriate repository. Set
up the deployment step to deploy the Lambda functions with AWS CloudFormation.
C. Create two AWS CodePipeline configurations for test and production environments. Configure the
production pipeline to have a manual approval step. Create one CodeCommit repository with a branch
for each environment. Set up each CodePipeline to retrieve the source code from the appropriate branch
in the repository. Set up the deployment step to deploy the Lambda functions with AWS
CloudFormation.
D. Create an AWS CodeBuild configuration for test and production environments. Configure the
production pipeline to have a manual approval step. Create one CodeCommit repository with a branch
for each environment. Push the Lambda function code to an Amazon S3 bucket. Set up the deployment
step to deploy the Lambda functions from the S3 bucket.
Answer: B
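The distinguishing step in the correct option is the manual approval gate in the production pipeline. As a rough illustration only, the fragment below sketches how such a stage might look inside the pipeline structure passed to CodePipeline's create_pipeline call via boto3; the stage and action names are invented for this example.

    # Hypothetical fragment of a CodePipeline definition (Python/boto3 structure).
    # This dict would sit in the "stages" list of the production pipeline only;
    # the test pipeline omits it so deployments there stay fully automatic.
    approval_stage = {
        "name": "ProductionApproval",
        "actions": [{
            "name": "ManualApproval",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "runOrder": 1,
        }],
    }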
Question #:2
A company used AWS CloudFormation to deploy a three-tier web application that stores data in an Amazon
RDS MySQL Multi-AZ DB instance. A DevOps Engineer must upgrade the RDS instance to the latest major
version of MySQL while incurring minimal downtime.
How should the Engineer upgrade the instance while minimizing downtime?
A. Update the EngineVersion property of the AWS::RDS::DBInstance resource type in the
CloudFormation template to the latest desired version. Launch a second stack and make the new RDS
instance a read replica.
B. Update the DBEngineVersion property of the AWS::RDS::DBInstance resource type in the
CloudFormation template to the latest desired version. Perform an Update Stack operation. Create a new
RDS Read Replicas resource with the same properties as the instance to be upgraded. Perform a second
Update Stack operation.
C. Update the DBEngineVersion property of the AWS::RDS::DBInstance resource type in the
CloudFormation template to the latest desired version. Create a new RDS Read Replicas resource with
the same properties as the instance to be upgraded. Perform an Update Stack operation.
D. Update the EngineVersion property of the AWS::RDS::DBInstance resource type in the
CloudFormation template to the latest version, and perform an Update Stack operation.
Answer: A
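The chosen answer starts by raising the EngineVersion property of the AWS::RDS::DBInstance resource and launching a second stack. Purely as a sketch, the fragment below shows that property expressed as a Python dict; the logical ID and version number are placeholders, and the remaining DB instance properties are omitted.

    # Hypothetical CloudFormation fragment, expressed as a Python dict.
    # A completed template (all required DB properties filled in) would be
    # passed to cloudformation.create_stack() to launch the second stack.
    rds_upgrade_fragment = {
        "UpgradedDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "EngineVersion": "8.0",  # target major version (placeholder)
                # ...remaining instance properties omitted for brevity...
            },
        }
    }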
Question #:3
An Application team is refactoring one of its internal tools to run in AWS instead of on-premises hardware.
All of the code is currently written in Python and is standalone. There is also no external state store or
relational database to be queried.
Which deployment pipeline incurs the LEAST amount of changes between development and production?
A. Developers should use Docker for local development. When dependencies are changed and a new
container is ready, use AWS CodePipeline and AWS CodeBuild to perform functional tests and then
upload the new container to Amazon ECR. Use AWS CloudFormation with the custom container to
deploy to Amazon ECS.
B. Developers should use Docker for local development. Use AWS SMS to import these containers as
AMIs for Amazon EC2 whenever dependencies are updated. Use AWS CodePipeline to test new code
changes against the Auto Scaling group.
C. Developers should use their native Python environment. When dependencies are changed and a new
container is ready, use AWS CodePipeline and AWS CodeBuild to perform functional tests and then
upload the new container to Amazon ECR. Use AWS CloudFormation with the custom container to
deploy to Amazon ECS.
D. Developers should use their native Python environment. When dependencies are changed and new
code is ready, use AWS CodePipeline and AWS CodeBuild to perform functional tests and then upload
the new container to Amazon ECR. Use CodePipeline and CodeBuild with the custom container to
test new code changes inside AWS Elastic Beanstalk.
Answer: A
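The chosen option ends with AWS CloudFormation deploying the custom container to Amazon ECS. As an illustrative sketch only, the fragment below shows what an ECS task definition resource referencing the pushed ECR image might look like; the repository URI, family name, and memory size are invented for this example.

    # Hypothetical CloudFormation fragment for the ECS deployment, as a Python dict.
    ecs_task_fragment = {
        "InternalToolTask": {
            "Type": "AWS::ECS::TaskDefinition",
            "Properties": {
                "Family": "internal-tool",
                "ContainerDefinitions": [{
                    "Name": "app",
                    "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/internal-tool:latest",
                    "Memory": 512,
                    "Essential": True,
                }],
            },
        }
    }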
Question #:4
A company runs a production application workload in a single AWS account that uses Amazon Route 53,
AWS Elastic Beanstalk, and Amazon RDS. In the event of a security incident, the Security team wants the
application workload to fail over to a new AWS account. The Security team also wants to block all access to
the original account immediately, with no access to any AWS resources in the original AWS account, during
forensic analysis.
What is the most cost-effective way to prepare to fail over to the second account prior to a security incident?
A. Migrate the Amazon Route 53 configuration to a dedicated AWS account. Mirror the Elastic Beanstalk
configuration in a different account. Enable RDS Database Read Replicas in a different account.
B. Migrate the Amazon Route 53 configuration to a dedicated AWS account. Save/copy the Elastic
Beanstalk configuration files in a different AWS account. Copy snapshots of the RDS Database to a
different account.
C. Save/copy the Amazon Route 53 configurations for use in a different AWS account after an incident.
Save/copy Elastic Beanstalk configuration files to a different account. Enable the RDS database read
replica in a different account.
D. Save/copy the Amazon Route 53 configurations for use in a different AWS account after an incident.
Mirror the configuration of Elastic Beanstalk in a different account. Copy snapshots of the RDS
database to a different account.
Answer: A
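Moving the Route 53 configuration to a dedicated account means DNS can still be redirected even after the compromised account is locked down. As a small, hedged sketch of the preparation work, the snippet below lists the record sets of a hosted zone so they can be recreated in the dedicated account; the hosted zone ID is a placeholder.

    import boto3

    route53 = boto3.client("route53")
    # Hypothetical hosted zone ID; in practice it would come from list_hosted_zones().
    response = route53.list_resource_record_sets(HostedZoneId="Z0000000EXAMPLE")
    for record_set in response["ResourceRecordSets"]:
        print(record_set["Name"], record_set["Type"])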
Question #:5
A company is creating a software solution that executes a specific parallel-processing mechanism. The
software can scale to tens of servers in some special scenarios. This solution uses a proprietary library that is
license-based, requiring that each individual server have a single, dedicated license installed. The company has
200 licenses and is planning to run 200 server nodes concurrently at most.
The company has requested the following features:
• A mechanism to automate the use of the licenses at scale.
• Creation of a dashboard to use in the future to verify which licenses are available at any moment.
What is the MOST effective way to accomplish these requirements?
A. Upload the licenses to a private Amazon S3 bucket. Create an AWS CloudFormation template with a
Mappings section for the licenses. In the template, create an Auto Scaling group to launch the servers. In
the user data script, acquire an available license from the Mappings section. Create an Auto Scaling
lifecycle hook, then use it to update the mapping after the instance is terminated.
B. Upload the licenses to an Amazon DynamoDB table. Create an AWS CloudFormation template that
uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license
from the DynamoDB table. Create an Auto Scaling lifecycle hook, then use it to update the mapping
after the instance is terminated.
C. Upload the licenses to a private Amazon S3 bucket. Populate an Amazon SQS queue with the list of
licenses stored in S3. Create an AWS CloudFormation template that uses an Auto Scaling group to
launch the servers. In the user data script, acquire an available license from SQS. Create an Auto Scaling
lifecycle hook, then use it to put the license back in SQS after the instance is terminated.
D. Upload the licenses to an Amazon DynamoDB table. Create an AWS CLI script to launch the servers by
using the parameter --count, with min:max instances to launch. In the user data script, acquire an
available license from the DynamoDB table. Monitor each instance and, in case of failure, replace the
instance, then manually update the DynamoDB table.
Answer: D
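Both DynamoDB-based options have each booting node claim a license from the table in its user data script. As a minimal sketch (the table layout, attribute names, and instance ID are assumptions for this example), the claim can be made atomic with a conditional update so two nodes never take the same license:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Hypothetical table layout: one item per license, keyed on license_id,
    # with instance_id set only while the license is in use.
    free = dynamodb.scan(
        TableName="licenses",
        FilterExpression="attribute_not_exists(instance_id)",
    )
    license_id = free["Items"][0]["license_id"]["S"]  # assumes a free license exists

    # Conditional write: fails if another node claimed the license first.
    dynamodb.update_item(
        TableName="licenses",
        Key={"license_id": {"S": license_id}},
        UpdateExpression="SET instance_id = :i",
        ConditionExpression="attribute_not_exists(instance_id)",
        ExpressionAttributeValues={":i": {"S": "i-0123456789abcdef0"}},  # placeholder ID
    )

Clearing instance_id again when a failed node is replaced frees the license and keeps the dashboard count accurate.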
Question #:6
A healthcare provider has a hybrid architecture that includes 120 on-premises VMware servers running
Red Hat and 50 Amazon EC2 instances running Amazon Linux. The company is in the middle of an all-in
migration to AWS and wants to implement a solution for collecting information from the on-premises virtual
machines and the EC2 instances for data analysis. The information includes:
- Operating system type and version
- Data for installed applications
- Network configuration information, such as MAC and IP addresses
- Amazon EC2 instance AMI ID and IAM profile
How can these requirements be met with the LEAST amount of administration?
A. Write a shell script to run as a cron job on EC2 instances to collect and push the data to Amazon S3. For
on-premises resources, use VMware vSphere to collect the data and write it into a file gateway for
storing the data in S3. Finally, use Amazon Athena on the S3 bucket for analytics.
B. Use a script on the on-premises virtual machines as well as the EC2 instances to gather and push the
data into Amazon S3, and then use Amazon Athena for analytics.
C. Install AWS Systems Manager agents on both the on-premises virtual machines and the EC2 instances.
Enable inventory collection and configure resource data sync to an Amazon S3 bucket to analyze the
data with Amazon Athena.
D. Use AWS Application Discovery Service for deploying Agentless Discovery Connector in the VMware
environment and Discovery Agents on the EC2 instances for collecting the data. Then use the AWS
Migration Hub Dashboard for analytics.
Answer: C
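The key piece of the correct option is the resource data sync, which copies Systems Manager inventory from every managed machine (on-premises and EC2) into a single S3 bucket that Athena can query. A rough boto3 sketch of creating the sync is below; the sync name, bucket, and Region are placeholders.

    import boto3

    ssm = boto3.client("ssm")
    # Hypothetical names; inventory collection itself is enabled separately
    # through a Systems Manager inventory association.
    ssm.create_resource_data_sync(
        SyncName="inventory-to-s3",
        S3Destination={
            "BucketName": "inventory-data-bucket",
            "SyncFormat": "JsonSerDe",
            "Region": "us-east-1",
        },
    )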
Question #:7
A company wants to adopt a methodology for handling security threats from leaked and compromised IAM
access keys. The DevOps Engineer has been asked to automate the process of acting upon compromised
access keys, which includes identifying users, revoking their permissions, and sending a notification to the
Security team.
Which of the following would achieve this goal?
A. Use the AWS Trusted Advisor generated security report for access keys. Use Amazon EMR to run
analytics on the report. Identify compromised IAM access keys and delete them. Use Amazon
CloudWatch with an EMR Cluster State Change event to notify the Security team.
B. Use AWS Trusted Advisor to identify compromised access keys. Create an Amazon CloudWatch
Events rule with Trusted Advisor as the event source, and AWS Lambda and Amazon SNS as targets.
Use AWS Lambda to delete compromised IAM access keys and Amazon SNS to notify the Security
team.
C. Use the AWS Trusted Advisor generated security report for access keys. Use AWS Lambda to scan
through the report, use the scan results within AWS Lambda to delete the compromised IAM access
keys, and use Amazon SNS to notify the Security team.
D. Use AWS Lambda with a third-party library to scan for compromised access keys, and use the scan
results within AWS Lambda to delete the compromised IAM access keys. Create Amazon CloudWatch
custom metrics for compromised keys. Create a CloudWatch alarm on the metrics to notify the Security team.
Answer: B
Explanation
Reference https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
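The Lambda target in the correct option only needs to read the exposed-key details from the Trusted Advisor event and delete the key; the SNS notification is handled by the rule's separate SNS target. The handler below is a hedged sketch: the event field names are assumptions made for illustration, not taken from the question.

    import boto3

    iam = boto3.client("iam")

    def handler(event, context):
        # Assumed shape of the Trusted Advisor "Exposed Access Keys" event detail.
        detail = event["detail"]["check-item-detail"]
        user = detail["User Name (IAM or Root)"]       # assumed field name
        access_key_id = detail["Access Key ID"]         # assumed field name
        # Remove the compromised credential.
        iam.delete_access_key(UserName=user, AccessKeyId=access_key_id)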
Question #:8
A DevOps Engineer uses Docker container technology to build an image-analysis application. The application
often sees spikes in traffic. The Engineer must automatically scale the application in response to customer
demand while maintaining cost effectiveness and minimizing any impact on availability.
What will allow the FASTEST response to spikes in traffic while fulfilling the other requirements?
A. Create an Amazon ECS cluster with the container instances in an Auto Scaling group. Configure the
ECS service to use Service Auto Scaling. Set up Amazon CloudWatch alarms to scale the ECS service
and cluster.
B. Deploy containers on an AWS Elastic Beanstalk Multicontainer Docker environment. Configure Elastic
Beanstalk to automatically scale the environment based on Amazon CloudWatch metrics.
C. Create an Amazon ECS cluster using Spot instances. Configure the ECS service to use Service Auto
Scaling. Set up Amazon CloudWatch alarms to scale the ECS service and cluster.
D. Deploy containers on Amazon EC2 instances. Deploy a container scheduler to schedule containers onto
EC2 instances. Configure EC2 Auto Scaling for EC2 instances based on available Amazon CloudWatch
metrics.
Answer: D
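Whichever scheduler runs on the EC2 instances, the scaling in the chosen option comes from EC2 Auto Scaling reacting to CloudWatch metrics. The snippet below is only a sketch of wiring a simple scale-out policy to a CPU alarm; the group, policy, and alarm names and the thresholds are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical simple scaling policy for the container-host fleet.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="container-hosts",
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
    )

    # Alarm that triggers the policy when average CPU stays above 70%.
    cloudwatch.put_metric_alarm(
        AlarmName="container-hosts-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "container-hosts"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )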
Question #:9
For auditing, analytics, and troubleshooting purposes, a DevOps Engineer for a data analytics application
needs to collect all of the application and Linux system logs from the Amazon EC2 instances before
termination. The company, on average, runs 10,000 instances in an Auto Scaling group. The company requires
the ability to quickly find logs based on instance IDs and date ranges.
Which is the MOST cost-effective solution?
A. Create an EC2 Instance-terminate Lifecycle Action on the group, write a termination script for pushing
logs into Amazon S3, and trigger an AWS Lambda function based on S3 PUT to create a catalog of log
files in an Amazon DynamoDB table with the primary key being Instance ID and sort key being
Instance Termination Date.
B. Create an EC2 Instance-terminate Lifecycle Action on the group, write a termination script for pushing
logs into Amazon CloudWatch Logs, create a CloudWatch Events rule to trigger an AWS Lambda
function to create a catalog of log files in an Amazon DynamoDB table with the primary key being
Instance ID and sort key being Instance Termination Date.
C. Create an EC2 Instance-terminate Lifecycle Action on the group, create an Amazon CloudWatch Events
rule based on it to trigger an AWS Lambda function for storing the logs in Amazon S3, and create a
catalog of log files in an Amazon DynamoDB table with the primary key being Instance ID and sort key
being Instance Termination Date.
D. Create an EC2 Instance-terminate Lifecycle Action on the group, push the logs into Amazon Kinesis
Data Firehose, and select Amazon ES as the destination for providing storage and search capability.
Answer: D
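All four options start from the same building block: a termination lifecycle hook that holds the instance long enough for its logs to be shipped. As a minimal, hedged sketch (the group name, hook name, and timeout are placeholders), the hook could be created like this:

    import boto3

    autoscaling = boto3.client("autoscaling")
    # Hypothetical names; the log-shipping work happens while the hook holds
    # the instance in the Terminating:Wait state.
    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName="analytics-fleet",
        LifecycleHookName="collect-logs-on-terminate",
        LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
        HeartbeatTimeout=300,
        DefaultResult="CONTINUE",
    )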
Question #:10
A DevOps Engineer must create a Linux AMI in an automated fashion. The ID of the newly created AMI
must be stored in a location where other build pipelines can access it programmatically.
What is the MOST cost-effective way to do this?
A. Build a pipeline in AWS CodePipeline to download and save the latest operating system Open
Virtualization Format (OVF) image to an Amazon S3 bucket, then customize the image using the
guestfish utility. Use the virtual machine (VM) import command to convert the OVF to an AMI, and
store the AMI identification output as an AWS Systems Manager parameter.
B. Create an AWS Systems Manager automation document with values instructing how the image should
be created. Then build a pipeline in AWS CodePipeline to execute the automation document to build the
AMI when triggered. Store the AMI identification output as a Systems Manager parameter.
C. Build a pipeline in AWS CodePipeline to take a snapshot of an Amazon EC2 instance running the latest
version of the application. Then start a new EC2 instance from the snapshot and update the running
instance using an AWS Lambda function. Take a snapshot of the updated instance, then convert it to an
AMI. Store the AMI identification output in an Amazon DynamoDB table.
D. Launch an Amazon EC2 instance and install Packer. Then configure a Packer build with values defining
how the image should be created. Build a Jenkins pipeline to invoke the Packer build when triggered to
build an AMI. Store the AMI identification output in an Amazon DynamoDB table.
Answer: D
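In the chosen option, the final step of the Jenkins/Packer pipeline records the new AMI ID where other pipelines can read it. A small sketch of that step is below; the table name and AMI ID are placeholders, with the real ID coming from the Packer build output.

    import boto3
    from datetime import datetime, timezone

    dynamodb = boto3.client("dynamodb")
    # Hypothetical catalog table keyed on ami_id.
    dynamodb.put_item(
        TableName="ami-catalog",
        Item={
            "ami_id": {"S": "ami-0abcdef1234567890"},   # placeholder; from Packer output
            "created_at": {"S": datetime.now(timezone.utc).isoformat()},
        },
    )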
https://www.dumpsforsure.com/amazon/dop-c01-dumps.html