Our experts have designed reliable SAP-C01 exam dumps material for candidates preparing for the AWS Certified Solutions Architect - Professional certification. You can ace your exam by preparing from this comprehensive material. The Questions & Answers are up to date and valid for the current exam format. After practicing these questions, the questions in the actual exam will feel familiar and you will answer them competently. We have received positive feedback on our material, and our experts work continuously on improvements so that all content stays in line with the latest exam format. The Online Testing Engine will give you confidence in your knowledge: practice on the Testing Software and improve your skills. We also offer a money-back guarantee for your satisfaction, and we assure you that you will pass on the first attempt if you follow the guidelines at DumpsforSure. https://www.dumpsforsure.com/amazon/sap-c01-dumps.html
2020 Amazon SAP-C01 Dumps - SAP-C01 PDF Dumps - DumpsforSure
Amazon Web Services
SAP-C01
[Total Questions: 10]
https://www.dumpsforsure.com/amazon/sap-c01-dumps.html
Question #:1 - (Exam Topic 2)
An organization is setting up an application on AWS to have both High Availability (HA) and Disaster
Recovery (DR). The organization wants to have both Recovery point objective (RPO) and Recovery time
objective (RTO) of 10 minutes.
Which of the following service configurations does NOT help the organization achieve the stated RPO and RTO?
A. Take a snapshot of the data every 10 minutes and copy it to the other region.
B. Assign an Elastic IP to a running instance and use Route 53 to map the user's domain to that IP.
C. Create ELB with multi-region routing to allow automated failover when required.
D. Use an AMI copy to keep the AMI available in other regions.
Answer: C
Explanation
AWS provides on-demand, scalable infrastructure. Amazon EC2 allows the user to launch On-Demand instances, and the organization should create an AMI of the running instance and copy the AMI to another region to enable Disaster Recovery (DR) in case of a region failure. The organization should also use EBS for persistent storage and take a snapshot every 10 minutes to meet the Recovery Point Objective (RPO). They should also set up an Elastic IP and use it with Route 53 to route requests to the same IP. When one of the instances fails, the organization can launch a new instance and assign the same EIP to it to achieve High Availability (HA) and meet the Recovery Time Objective (RTO). ELB, however, works only within a particular region and does not route requests across regions, so option C does not help achieve the stated objectives.
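For readers who want to see these steps concretely, the boto3 sketch below outlines the snapshot copy, AMI copy, and Elastic IP reassignment described above. The region names, resource IDs, and snapshot schedule are placeholder assumptions, not values taken from the question.

import boto3

SOURCE_REGION = "us-east-1"   # assumed primary region
DR_REGION = "us-west-2"       # assumed DR region

ec2_src = boto3.client("ec2", region_name=SOURCE_REGION)
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

# Snapshot the EBS volume; running this every 10 minutes supports the 10-minute RPO.
snapshot = ec2_src.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                   Description="10-minute RPO snapshot")

# Copy the latest snapshot and the AMI into the DR region.
ec2_dr.copy_snapshot(SourceRegion=SOURCE_REGION, SourceSnapshotId=snapshot["SnapshotId"])
ec2_dr.copy_image(Name="app-ami-dr-copy",
                  SourceImageId="ami-0123456789abcdef0",
                  SourceRegion=SOURCE_REGION)

# After a failure, re-point the existing Elastic IP at the replacement instance so the
# Route 53 record that maps the domain to this EIP keeps working.
ec2_src.associate_address(AllocationId="eipalloc-0123456789abcdef0",
                          InstanceId="i-0fedcba9876543210")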
Question #:2 - (Exam Topic 2)
You want to use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an
Amazon Virtual Private Cloud (VPC).
What criterion must be met for this to be possible?
A. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the
public AWS CodeDeploy endpoint.
B. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the
public Amazon S3 service endpoint.
C. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the public
AWS CodeDeploy and Amazon S3 service endpoints.
D. It is not currently possible to use AWS CodeDeploy to deploy an application to Amazon EC2 instances
running within an Amazon Virtual Private Cloud (VPC).
Answer: C
Explanation
You can use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon
Virtual Private Cloud (VPC).
However, the AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the
public AWS CodeDeploy and Amazon S3 service endpoints.
http://aws.amazon.com/codedeploy/faqs/
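As a quick illustration, the short Python sketch below checks from an instance that the public CodeDeploy and S3 endpoints are reachable. The regional hostnames are assumptions for us-east-1; substitute the endpoints for your region.

import socket

ENDPOINTS = [
    ("codedeploy.us-east-1.amazonaws.com", 443),  # assumed regional CodeDeploy endpoint
    ("s3.us-east-1.amazonaws.com", 443),          # assumed regional S3 endpoint
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK: reached {host}:{port}")
    except OSError as exc:
        print(f"FAILED: {host}:{port} -> {exc}")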
Topic 1, Exam Pool A
Question #:3 - (Exam Topic 1)
You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO.
You recently improved overall performance of the website by using CloudFront for dynamic content delivery, with your website as the origin.
After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude.
How do you fix your usage dashboard?
A. Enable CloudFront to deliver access logs to S3 and use them as input for the Elastic MapReduce job.
B. Turn on CloudTrail and use trail log files on S3 as input for the Elastic MapReduce job.
C. Change your log collection process to use CloudWatch ELB metrics as input for the Elastic MapReduce job.
D. Use the Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job.
E. Use the Elastic Beanstalk "Restart App server(s)" option to update log delivery to the Elastic MapReduce job.
Answer: A
Explanation
Once CloudFront serves requests from its edge caches, most client traffic never reaches the origin, so logs collected from the Elastic Beanstalk web servers only capture cache misses and undercount usage. Enabling CloudFront access logging to S3 and using those logs as the input of the Elastic MapReduce job restores an accurate view of all client requests. Rebuilding or restarting the Elastic Beanstalk environment does not change which traffic reaches the origin.
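The boto3 sketch below shows one way to enable CloudFront access logging to S3 as described in answer A. The distribution ID and log bucket name are placeholder assumptions.

import boto3

cloudfront = boto3.client("cloudfront")
DIST_ID = "E1234567890ABC"  # placeholder distribution ID

resp = cloudfront.get_distribution_config(Id=DIST_ID)
config = resp["DistributionConfig"]
config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": "my-cf-access-logs.s3.amazonaws.com",  # placeholder log bucket
    "Prefix": "website/",
}

# The ETag returned by get_distribution_config must be passed back as IfMatch.
cloudfront.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=resp["ETag"])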
Question #:4 - (Exam Topic 1)
What does elasticity mean to AWS?
A. The ability to scale computing resources up easily, with minimal friction and down with latency.
B. The ability to scale computing resources up and down easily, with minimal friction.
C. The ability to provision cloud computing resources in expectation of future demand.
D. The ability to recover from business continuity events with minimal friction.
Answer: B
Topic 5, Exam Pool E
Question #:5 - (Exam Topic 5)
A three-tier web application runs on Amazon EC2 instances. Cron daemons are used to trigger scripts that
collect the web server, application, and database logs and send them to a centralized location every hour.
Occasionally, scaling events or unplanned outages have caused the instances to stop before the latest logs were
collected, and the log files were lost.
Which of the following options is the MOST reliable way of collecting and preserving the log files?
A. Update the cron jobs to run every 5 minutes instead of every hour to reduce the possibility of log
messages being lost in an outage.
B. Use Amazon CloudWatch Events to trigger AWS Systems Manager Run Command to invoke the log
collection scripts more frequently to reduce the possibility of log messages being lost in an outage.
C. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs.
Configure the agent with a batch count of 1 to reduce the possibility of log messages being lost in an
outage.
D. Use Amazon CloudWatch Events to trigger AWS Lambda to SSH into each running instance and invoke
the log collection scripts more frequently to reduce the possibility of log messages being lost in an
outage.
Answer: C
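Answer C relies on the CloudWatch Logs agent streaming each log line as it is written. The boto3 sketch below illustrates the same direct-streaming idea that the agent automates; the log group and stream names are placeholder assumptions.

import time
import boto3

logs = boto3.client("logs")
GROUP, STREAM = "/webapp/access", "i-0123456789abcdef0"  # assumed names

try:
    logs.create_log_group(logGroupName=GROUP)
except logs.exceptions.ResourceAlreadyExistsException:
    pass  # group already exists from a previous run
try:
    logs.create_log_stream(logGroupName=GROUP, logStreamName=STREAM)
except logs.exceptions.ResourceAlreadyExistsException:
    pass  # stream already exists from a previous run

# Each message is pushed as soon as it is produced, so nothing waits on an hourly
# cron job and nothing is lost if the instance is terminated.
logs.put_log_events(
    logGroupName=GROUP,
    logStreamName=STREAM,
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "GET /index.html 200"}],
)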
Question #:6 - (Exam Topic 5)
A company is migrating its on-premises build artifact server to an AWS solution. The current system consists
of an Apache HTTP server that serves artifacts to clients on the local network, restricted by the perimeter
firewall. The artifact consumers are largely build automation scripts that download artifacts via anonymous
HTTP, which the company will be unable to modify within its migration timetable.
The company decides to move the solution to Amazon S3 static website hosting. The artifact consumers will
be migrated to Amazon EC2 instances located within both public and private subnets in a virtual private cloud
(VPC).
Which solution will permit the artifact consumers to download artifacts without modifying the existing
automation scripts?
A. Create a NAT gateway within a public subnet of the VPC. Add a default route pointing to the NAT
gateway into the route table associated with the subnets containing consumers. Configure the bucket
policy to allow the s3:ListBucket and s3:GetObject actions using the condition IpAddress and the
condition key aws:SourceIp matching the Elastic IP address of the NAT gateway.
B. Create a VPC endpoint and add it to the route table associated with subnets containing consumers.
Configure the bucket policy to allow s3:ListBucket and s3:GetObject actions using the condition
StringEquals and the condition key aws:sourceVpce matching the ID of the VPC endpoint.
C. Create an IAM role and instance profile for Amazon EC2 and attach it to the instances that consume
build artifacts. Configure the bucket policy to allow the s3:ListBucket and s3:GetObjects actions for the
principal matching the IAM role created.
D. Create a VPC endpoint and add it to the route table associated with subnets containing consumers.
Configure the bucket policy to allow s3:ListBucket and s3:GetObject actions using the condition
IpAddress and the condition key aws:SourceIp matching the VPC CIDR block.
Answer: B
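To make answer B concrete, the sketch below applies a bucket policy restricted to a VPC endpoint using boto3. The bucket name and VPC endpoint ID are placeholder assumptions.

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowArtifactReadsFromVpcEndpoint",
        "Effect": "Allow",
        "Principal": "*",  # anonymous access, matching the unmodified build scripts
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::build-artifacts-bucket",
                     "arn:aws:s3:::build-artifacts-bucket/*"],
        "Condition": {"StringEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="build-artifacts-bucket", Policy=json.dumps(policy))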
Topic 4, Exam Pool D
Question #:7 - (Exam Topic 4)
____________ pricing offers significant savings over the normal price of DynamoDB provisioned throughput capacity.
A. Discount Voucher
B. Reserved Capacity
C. Discount Service
D. Reserved Point
Answer: B
Explanation
Reserved Capacity pricing offers significant savings over the normal price of DynamoDB provisioned throughput capacity. When you buy Reserved Capacity, you pay a one-time upfront fee and commit to paying for a minimum usage level, at the applicable hourly rates, for the duration of the Reserved Capacity term.
http://aws.amazon.com/dynamodb/pricing/
Question #:8 - (Exam Topic 4)
A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 by
mistake. The user is trying to create another subnet of CIDR 20.0.1.0/24.
How can the user create the second subnet?
A. The user can modify the first subnet CIDR with AWS CLI
B. The user can modify the first subnet CIDR from the console
C. There is no need to update the subnet as VPC automatically adjusts the CIDR of the first subnet based
on the second subnet's CIDR
D. It is not possible to create a second subnet with overlapping IP CIDR without deleting the first subnet.
Answer: D
Explanation
A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create subnets within the VPC and launch instances inside them. The user can create a single subnet that is the same size as the VPC. However, the user cannot then create any other subnet, since the CIDR of the second subnet would conflict with the first subnet. The CIDR of a subnet cannot be modified once it is created. Thus, in this case the user has to delete the first subnet and create new, smaller subnets.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
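The boto3 sketch below reproduces the scenario: the second create_subnet call fails because its CIDR overlaps the first subnet, which already spans the whole VPC. The CIDR values follow the question; everything else is illustrative.

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="20.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.0.0/16")  # the mistaken full-size subnet

try:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.1.0/24")
except ClientError as exc:
    # Expect a CIDR conflict error; the first subnet must be deleted before any
    # other subnet can be created in this VPC.
    print(exc.response["Error"]["Code"], exc.response["Error"]["Message"])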
Topic 3, Exam Pool C
Question #:9 - (Exam Topic 3)
To scale out AWS resources using manual Auto Scaling, which of the following parameters should the user change?
A. Current capacity
B. Desired capacity
C. Preferred capacity
D. Maximum capacity
Answer: B
Explanation
Manual scaling, as part of Auto Scaling, allows the user to change the capacity of an Auto Scaling group and add or remove EC2 instances on the fly. To execute manual scaling, the user should modify the desired capacity; Auto Scaling will then adjust the number of instances accordingly.
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html
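A minimal boto3 sketch of the manual scaling step described above is shown below; the Auto Scaling group name and capacity value are placeholder assumptions.

import boto3

autoscaling = boto3.client("autoscaling")

# Changing the desired capacity is what triggers Auto Scaling to launch or terminate
# instances until the group matches the new value.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-tier-asg",
    DesiredCapacity=6,
    HonorCooldown=False,
)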
Question #:10 - (Exam Topic 3)
An EC2 instance that performs source/destination checks by default is launched in a private VPC subnet. All
security, NACL, and routing definitions are configured as expected. A custom NAT instance is launched.
Which of the following must be done for the custom NAT instance to work?
A. The source/destination checks should be disabled on the NAT instance.
B. The NAT instance should be launched in public subnet.
C. The NAT instance should be configured with a public IP address.
D. The NAT instance should be configured with an elastic IP address.
Answer: A
Explanation
Each EC2 instance performs source/destination checks by default. This means that the instance must be the
source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and
receive traffic when the source or destination is not itself. Therefore, you must disable source/destination
checks on the NAT instance.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html#EIP_DisableSrcDestCheck
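Disabling the check is a single API call; the boto3 sketch below shows it, with a placeholder instance ID.

import boto3

ec2 = boto3.client("ec2")

# Turn off the source/destination check so the NAT instance can forward traffic
# that is not addressed to or from itself.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    SourceDestCheck={"Value": False},
)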
https://www.dumpsforsure.com/amazon/sap-c01-dumps.html