DOP-C01 | A Review Of Real DOP-C01 Questions Pool

It is nearly impossible to pass the Amazon Web Services DOP-C01 exam in the short term without help. Come to Ucertify and find the most advanced, accurate, and guaranteed Amazon Web Services DOP-C01 practice questions. You will get a surprising result from our up-to-date AWS Certified DevOps Engineer - Professional practice guides.

We also have free DOP-C01 dump questions for you:

NEW QUESTION 1
You are a DevOps Engineer for your company. The company has a number of CloudFormation templates in AWS. The IT Security department wants to know who is using the CloudFormation stacks in the company's AWS account. Which of the following can be done to address this security concern?

  • A. Enable CloudWatch Events for each CloudFormation stack to track the resource creation events.
  • B. Enable CloudTrail logs so that the API calls can be recorded.
  • C. Enable CloudWatch Logs for each CloudFormation stack to track the resource creation events.
  • D. Connect SQS and CloudFormation so that a message is published for each resource created in the CloudFormation stack.

Answer: B

Explanation:
This is given as a best practice in the AWS documentation:
AWS CloudTrail tracks anyone making AWS CloudFormation API calls in your AWS account. API calls are logged whenever anyone uses the AWS CloudFormation API, the AWS CloudFormation console, a back-end console, or AWS CloudFormation AWS CLI commands.
Enable logging and specify an Amazon S3 bucket to store the logs. That way, if you ever need to, you can audit who made what AWS CloudFormation call in your account.
For more information on the best practices, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
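
As an illustration (a minimal boto3 sketch, not part of the official answer), once CloudTrail is enabled you could list recent CloudFormation API calls and who made them:

    import boto3

    # Look up recent CloudTrail events emitted by the CloudFormation service
    cloudtrail = boto3.client("cloudtrail")
    response = cloudtrail.lookup_events(
        LookupAttributes=[{
            "AttributeKey": "EventSource",
            "AttributeValue": "cloudformation.amazonaws.com",
        }],
        MaxResults=50,
    )
    for event in response["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])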

NEW QUESTION 2
You are a DevOps Engineer for your company. You are responsible for creating CloudFormation templates for your company. There is a requirement to ensure that an S3 bucket is created for all resources in development for logging purposes. How would you achieve this?

  • A. Create separate CloudFormation templates for development and production.
  • B. Create a parameter in the CloudFormation template and then use the Condition clause in the template to create an S3 bucket if the parameter has a value of development.
  • C. Create an S3 bucket beforehand and then just provide access based on the tag value mentioned in the CloudFormation template.
  • D. Use the metadata section in the CloudFormation template to decide whether to create the S3 bucket or not.

Answer: B

Explanation:
The AWS documentation mentions the following:
You might use conditions when you want to reuse a template that can create resources in different contexts, such as a test environment versus a production environment. In your template, you can add an EnvironmentType input parameter, which accepts either prod or test as inputs. For the production environment, you might include Amazon EC2 instances with certain capabilities; however, for the test environment, you want to use reduced capabilities to save money. With conditions, you can define which resources are created and how they're configured for each environment type.
For more information on CloudFormation conditions, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
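
A minimal sketch of such a template, expressed here as the JSON a CloudFormation template parses to (the parameter name EnvType and the bucket's logical ID are illustrative, not from the question):

    import json

    template = {
        "Parameters": {
            "EnvType": {
                "Type": "String",
                "AllowedValues": ["development", "production"],
            }
        },
        "Conditions": {
            # True only when the stack is launched with EnvType=development
            "IsDevelopment": {"Fn::Equals": [{"Ref": "EnvType"}, "development"]}
        },
        "Resources": {
            "DevLoggingBucket": {
                "Type": "AWS::S3::Bucket",
                "Condition": "IsDevelopment",  # bucket is created only in development
            }
        },
    }
    print(json.dumps(template, indent=2))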

NEW QUESTION 3
You have a web application composed of an Auto Scaling group of web servers behind a load balancer, and create a new AMI for each application version for deployment. You have a new version to release, and you want to use the A/B deployment technique to migrate users over in a controlled manner while the size of the fleet remains constant over a period of 12 hours, to ensure that the new version is performing well.
What option should you choose to enable this technique while being able to roll back easily?

  • A. Create an Auto Scaling launch configuration with the new AMI. Configure the Auto Scaling group with the new launch configuration. Use the Auto Scaling rolling updates feature to migrate to the new version.
  • B. Create an Auto Scaling launch configuration with the new AMI. Create an Auto Scaling group configured to use the new launch configuration and to register instances with the same load balancer. Vary the desired capacity of each group to migrate.
  • C. Create an Auto Scaling launch configuration with the new AMI. Configure Auto Scaling to vary the proportion of instances launched from the two launch configurations.
  • D. Create a load balancer. Create an Auto Scaling launch configuration with the new AMI, configured to use the new launch configuration and to register instances with the new load balancer. Use Amazon Route 53 weighted round robin to vary the proportion of requests sent to the load balancers.
  • E. Launch new instances using the new AMI and attach them to the Auto Scaling group. Configure Elastic Load Balancing to vary the proportion of requests sent to instances running the two application versions.

Answer: D

Explanation:
Since you want to migrate users to the new application in a controlled manner, the best way is to use the Route 53 weighted routing method. The AWS documentation mentions the following on this method:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
For more information on the weighted round robin method, please visit the link: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted
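
As a rough boto3 sketch of the weighted records this answer implies (the zone ID, record name, and ELB DNS names are hypothetical), two records with the same name but different SetIdentifier values split the traffic 90/10:

    import boto3

    route53 = boto3.client("route53")

    def weighted_record(set_id, weight, target):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": set_id,   # distinguishes the two weighted records
                "Weight": weight,          # relative share of queries for this target
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={"Changes": [
            weighted_record("current-version", 90, "old-elb.example.com"),
            weighted_record("new-version", 10, "new-elb.example.com"),
        ]},
    )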

NEW QUESTION 4
You need to create a Route53 record automatically in CloudFormation when not running in production, during all launches of a template. How should you implement this?

  • A. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when environment is not production.
  • B. Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production.
  • C. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record with a null string when environment is production.
  • D. Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.

Answer: A

Explanation:
The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value. Based on the result of that condition, you can conditionally create resources. If you have multiple conditions, separate them with commas.
You might use conditions when you want to reuse a template that can create resources in different contexts, such as a test environment versus a production environment. In your template, you can add an EnvironmentType input parameter, which accepts either prod or test as inputs. For the production environment, you might include Amazon EC2 instances with certain capabilities; however, for the test environment, you want to use reduced capabilities to save money. With conditions, you can define which resources are created and how they're configured for each environment type.
For more information on CloudFormation conditions, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
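
Sketching the condition the correct answer describes (again as the JSON form of a template; all names are illustrative), Fn::Not inverts the equality test so the record is created only outside production:

    import json

    template = {
        "Parameters": {
            "EnvType": {"Type": "String"}
        },
        "Conditions": {
            "IsNotProduction": {
                "Fn::Not": [{"Fn::Equals": [{"Ref": "EnvType"}, "production"]}]
            }
        },
        "Resources": {
            "AppDnsRecord": {
                "Type": "AWS::Route53::RecordSet",
                "Condition": "IsNotProduction",  # skipped when EnvType=production
                "Properties": {
                    "HostedZoneName": "example.com.",
                    "Name": "test.example.com.",
                    "Type": "CNAME",
                    "TTL": "300",
                    "ResourceRecords": ["target.example.com"],
                },
            }
        },
    }
    print(json.dumps(template, indent=2))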

NEW QUESTION 5
Your company has an e-commerce platform which is expanding all over the globe. You have EC2 instances deployed in multiple regions, and you want to monitor the performance of all of these EC2 instances. How will you set up CloudWatch to monitor EC2 instances in multiple regions?

  • A. Create separate dashboards in every region
  • B. Register instances running in different regions with CloudWatch
  • C. Have one single dashboard to report metrics to CloudWatch from different regions
  • D. This is not possible

Answer: C

Explanation:
You can monitor AWS resources in multiple regions using a single CloudWatch dashboard. For example, you can create a dashboard that shows CPU utilization for an EC2 instance located in the us-west-2 region with your billing metrics, which are located in the us-east-1 region.
For more information on CloudWatch dashboards, please refer to the below URL: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cross_region_dashboard.html
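
As a minimal boto3 sketch (the dashboard name, instance IDs, and regions below are hypothetical), each widget in a dashboard body can name its own region, which is what makes a single cross-region dashboard possible:

    import boto3
    import json

    def cpu_widget(region, instance_id, x):
        return {
            "type": "metric",
            "x": x, "y": 0, "width": 12, "height": 6,
            "properties": {
                "region": region,  # each widget can pull metrics from its own region
                "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", instance_id]],
                "title": f"CPU in {region}",
            },
        }

    body = {"widgets": [
        cpu_widget("us-east-1", "i-0123456789abcdef0", 0),
        cpu_widget("eu-west-1", "i-0fedcba9876543210", 12),
    ]}

    boto3.client("cloudwatch").put_dashboard(
        DashboardName="global-ec2", DashboardBody=json.dumps(body)
    )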

NEW QUESTION 6
Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? Choose 2 answers from the options below

  • A. Deploy ElastiCache in-memory cache running in each availability zone
  • B. Implement sharding to distribute load to multiple RDS MySQL instances
  • C. Increase the RDS MySQL Instance size and Implement provisioned IOPS
  • D. Add an RDS MySQL read replica in each availability zone

Answer: AD

Explanation:
Implement Read Replicas and ElastiCache.
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
For more information on Read Replicas, please visit the below link:
• https://aws.amazon.com/rds/details/read-replicas/
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
For more information on Amazon ElastiCache, please visit the below link:
• https://aws.amazon.com/elasticache/
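
A hedged boto3 sketch of adding a read replica per Availability Zone, as option D suggests (the instance identifiers and AZ names are hypothetical):

    import boto3

    rds = boto3.client("rds")
    # One replica per AZ serves local read traffic and relieves the primary
    for az in ["us-east-1a", "us-east-1b"]:
        rds.create_db_instance_read_replica(
            DBInstanceIdentifier=f"shop-db-replica-{az}",
            SourceDBInstanceIdentifier="shop-db",  # the Multi-AZ primary
            AvailabilityZone=az,
        )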

NEW QUESTION 7
You are responsible for an application that leverages the Amazon SDK and Amazon EC2 roles for storing and retrieving data from Amazon S3, accessing multiple DynamoDB tables, and exchanging messages with Amazon SQS queues. Your VP of Compliance is concerned that you are not following security best practices for securing all of this access. He has asked you to verify that the application's AWS access keys are not older than six months and to provide control evidence that these keys will be rotated a minimum of once every six months.
Which option will provide your VP with the requested information?

  • A. Create a script to query the IAM list-access-keys API to get your application access key creation date and create a batch process to periodically create a compliance report for your VP.
  • B. Provide your VP with a link to IAM AWS documentation to address the VP's key rotation concerns.
  • C. Update your application to log changes to its AWS access key credential file and use a periodic Amazon EMR job to create a compliance report for your VP.
  • D. Create a new set of instructions for your configuration management tool that will periodically create and rotate the application's existing access keys and provide a compliance report to your VP.

Answer: B

Explanation:
The question focuses on IAM roles rather than access keys for accessing the services. With roles, AWS takes care of issuing and rotating the temporary credentials used to access these services, so there are no long-lived application access keys to audit or rotate.
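
To illustrate the point (a sketch only; it must run on the EC2 instance itself, and it uses IMDSv1 for brevity), the instance metadata service shows the role's temporary credentials together with their expiry, which is the evidence that rotation happens automatically:

    import json
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

    role_name = urllib.request.urlopen(BASE).read().decode()
    creds = json.loads(urllib.request.urlopen(BASE + role_name).read())
    # Temporary credentials are rotated by AWS before this timestamp passes
    print("Role:", role_name, "expires:", creds["Expiration"])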

NEW QUESTION 8
You are using Elastic Beanstalk to deploy an application that consists of a web and application server. There is a requirement to run some Python scripts before the application version is deployed to the web server. Which of the following can be used to achieve this?

  • A. Make use of container commands
  • B. Make use of Docker containers
  • C. Make use of custom resources
  • D. Make use of multiple Elastic Beanstalk environments

Answer: A

Explanation:
The AWS documentation mentions the following:
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
For more information on container commands, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
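
A minimal sketch of what such a configuration could contain (the script path and file name are hypothetical; the dict below is printed as the YAML you would save under .ebextensions/):

    import yaml  # PyYAML, used here only to render the config file

    config = {
        "container_commands": {
            "01_prepare": {
                "command": "python scripts/prepare.py",  # hypothetical pre-deploy script
                "leader_only": True,  # run on a single instance per deployment
            }
        }
    }
    # Save this output as e.g. .ebextensions/01-prepare.config
    print(yaml.safe_dump(config))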

NEW QUESTION 9
Your finance supervisor has set a budget of 2000 USD for the resources in AWS. Which of the following is the simplest way to ensure that you know when this threshold is being reached?

  • A. Use CloudWatch Events to notify you when you reach the threshold value
  • B. Use a CloudWatch billing alarm to notify you when you reach the threshold value
  • C. Use CloudWatch Logs to notify you when you reach the threshold value
  • D. Use SQS queues to notify you when you reach the threshold value

Answer: B

Explanation:
The AWS documentation mentions:
You can monitor your AWS costs by using CloudWatch. With CloudWatch, you can create billing alerts that notify you when your usage of your services exceeds thresholds that you define. You specify these threshold amounts when you create the billing alerts. When your usage exceeds these amounts, AWS sends you an email notification. You can also sign up to receive notifications when AWS prices change.
For more information on billing alarms, please refer to the below URL:
• http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html
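
A hedged boto3 sketch of such a billing alarm (the SNS topic ARN is hypothetical; billing metrics live in us-east-1 and require billing alerts to be enabled on the account):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-charges-over-2000-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,              # estimated charges update a few times a day
        EvaluationPeriods=1,
        Threshold=2000.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],
    )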

NEW QUESTION 10
There is a company website that is going to be launched in the coming weeks. There is a probability that the traffic will be quite high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a static website? Choose the correct answer from the options given below.

  • A. Duplicate the exact application architecture in another region and configure DNS weight-based routing
  • B. Enable failover to an application hosted in an on-premises data center.
  • C. Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution.
  • D. Add more servers in case the application fails.

Answer: C

Explanation:
Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources.
If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a CloudFront distribution.
For more information on DNS failover using Route 53, please refer to the below link:
• http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
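
As a rough boto3 sketch of that failover pair (the zone ID, health check ID, and S3 website hosted zone ID are placeholders to verify for your bucket's region):

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [
            {   # Primary: the live application, guarded by a health check
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com", "Type": "CNAME",
                    "SetIdentifier": "primary", "Failover": "PRIMARY",
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "app-elb.example.com"}],
                },
            },
            {   # Secondary: static S3 website, used only when the primary is unhealthy
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com", "Type": "A",
                    "SetIdentifier": "secondary", "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z3AQBSTGFYJSTF",  # S3 website zone, us-east-1
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]},
    )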

NEW QUESTION 11
Your company releases new features with high frequency while demanding high application availability. As part of the application's A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real-time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?

  • A. Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
  • B. Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
  • C. Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
  • D. Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.

Answer: C

Explanation:
You can use Kinesis Streams for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight.
The following are typical scenarios for using Kinesis Streams:
• Accelerated log and data feed intake and processing - You can have producers push data directly into a stream. For example, push system and application logs and they'll be available for processing in seconds. This prevents the log data from being lost if the front end or application server fails. Kinesis Streams provides accelerated data feed intake because you don't batch the data on the servers before you submit it for intake.
• Real-time metrics and reporting - You can use data collected into Kinesis Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
For more information on Amazon Kinesis, please refer to the below link:
• http://docs.aws.amazon.com/streams/latest/dev/introduction.html
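
A minimal producer sketch in boto3 (the stream name is hypothetical); each instance ships its log lines into the stream, and consumers analyze them in near real time:

    import boto3

    kinesis = boto3.client("kinesis")

    def ship_log_line(instance_id: str, line: str) -> None:
        # Partitioning by instance ID keeps each instance's logs ordered
        kinesis.put_record(
            StreamName="app-logs",
            Data=line.encode("utf-8"),
            PartitionKey=instance_id,
        )

    ship_log_line("i-0123456789abcdef0", "GET /health 200 4ms")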

NEW QUESTION 12
You have just been assigned to take care of the automated resources which have been set up by your company in AWS. You are looking at integrating some of the company's Chef recipes to be used with the existing OpsWorks stacks already set up in AWS. But when you go to the recipes section, you cannot see the option to add any recipes. What could be the reason for this?

  • A. Once you create a stack, you cannot assign custom recipes; this needs to be done when the stack is created.
  • B. Once you create layers in the stack, you cannot assign custom recipes; this needs to be done when the layers are created.
  • C. The stack layers were created without the custom cookbooks option. Just change the layer settings accordingly.
  • D. The stacks were created without the custom cookbooks option. Just change the stack settings accordingly.

Answer: D

Explanation:
The AWS documentation mentions the following:
To have a stack install and use custom cookbooks, you must configure the stack to enable custom cookbooks, if it is not already configured. You must then provide the repository URL and any related information such as a password.
For more information on custom cookbooks for OpsWorks, please visit the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable.html
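
Sketching the fix in boto3 (the stack ID and repository URL are hypothetical):

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")
    opsworks.update_stack(
        StackId="2f18b4cb-4de5-4429-a149-example",
        UseCustomCookbooks=True,  # the setting the stack was created without
        CustomCookbooksSource={
            "Type": "git",
            "Url": "https://github.com/example/company-cookbooks.git",
        },
    )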

NEW QUESTION 13
There is a requirement for a vendor to have access to an S3 bucket in your account. The vendor already has an AWS account. How can you provide access to the vendor on this bucket?

  • A. Create a new IAM user and grant the relevant access to the vendor on that bucket.
  • B. Create a new IAM group and grant the relevant access to the vendor on that bucket.
  • C. Create a cross-account role for the vendor account and grant that role access to the S3 bucket.
  • D. Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account.

Answer: C

Explanation:
The AWS documentation mentions:
You share resources in one account with users in a different account. By setting up cross-account access in this way, you don't need to create individual IAM users in each account. In addition, users don't have to sign out of one account and sign into another in order to access resources that are in different AWS accounts. After configuring the role, you see how to use the role from the AWS Management Console, the AWS CLI, and the API.
For more information on Cross Account Roles Access, please refer to the below link:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
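
From the vendor's side, consuming the role looks roughly like this boto3 sketch (the role ARN and bucket name are hypothetical):

    import boto3

    # The vendor assumes the cross-account role you created in your account...
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/VendorS3Access",
        RoleSessionName="vendor-audit",
    )["Credentials"]

    # ...and uses the temporary credentials to reach the shared bucket
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(s3.list_objects_v2(Bucket="shared-vendor-bucket").get("KeyCount"))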

NEW QUESTION 14
You are in charge of designing a number of CloudFormation templates for your organization. You need to ensure that no one can accidentally update the production-based resources in the stack during a stack update. How can this be achieved in the most efficient way?

  • A. Create tags for the resources and then create IAM policies to protect the resources.
  • B. Use a stack-based policy to protect the production-based resources.
  • C. Use S3 bucket policies to protect the resources.
  • D. Use MFA to protect the resources.

Answer: B

Explanation:
The AWS Documentation mentions
When you create a stack, all update actions are allowed on all resources. By default, anyone with stack update permissions can update all of the resources in the stack. During an update, some resources might require an interruption or be completely replaced, resulting in new physical IDs or completely new storage. You can prevent stack resources from being unintentionally updated or deleted during a stack update by using a stack policy. A stack policy is a JSON document that defines the update actions that can be performed on designated resources.
For more information on protecting stack resources, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
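
A minimal sketch of such a stack policy applied with boto3 (the stack name and the logical resource ID are hypothetical):

    import boto3
    import json

    policy = {
        "Statement": [
            {   # Deny every update action on the production database resource
                "Effect": "Deny",
                "Action": "Update:*",
                "Principal": "*",
                "Resource": "LogicalResourceId/ProductionDatabase",
            },
            {   # Allow updates on everything else in the stack
                "Effect": "Allow",
                "Action": "Update:*",
                "Principal": "*",
                "Resource": "*",
            },
        ]
    }

    boto3.client("cloudformation").set_stack_policy(
        StackName="prod-stack", StackPolicyBody=json.dumps(policy)
    )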

NEW QUESTION 15
When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? Choose two answers from the options given below

  • A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system.
  • B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system.
  • C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation.
  • D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system.

Answer: AD

Explanation:
There is a rich brand of CLI commands available for Cc2 Instances. The CLI is located in the following link:
• http://docs.aws.a mazon.com/cli/latest/reference/ec2/
You can then use the describe instances command to describe the EC2 instances.
If you specify one or more instance I Ds, Amazon CC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results.
• http://docs.aws.a mazon.com/cli/latest/reference/ec2/describe-insta nces.html
You can use the CC2 instances to get those instances which need to be removed from the configuration management system.
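
A hedged sketch of the reconciliation script from option A (the two cms_* helpers are stand-ins for whatever API your configuration management system actually exposes):

    import boto3

    def cms_list_nodes():
        # Hypothetical: return instance IDs known to the CMS
        return ["i-0123456789abcdef0", "i-0fedcba9876543210"]

    def cms_remove_node(instance_id):
        # Hypothetical: deregister the node from the CMS
        print("removing", instance_id)

    ec2 = boto3.client("ec2")
    active = set()
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["pending", "running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                active.add(instance["InstanceId"])

    for node in cms_list_nodes():
        if node not in active:  # terminated by Auto Scaling, still in the CMS
            cms_remove_node(node)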

NEW QUESTION 16
You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model that is unsupported by CloudFormation. How should you overcome this challenge?

  • A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
  • B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
  • C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
  • D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.

Answer: D

Explanation:
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack.
Use the AWS::CloudFormation::CustomResource or Custom::String resource type to define custom resources in your templates. Custom resources require one property: the service token, which specifies where AWS CloudFormation sends requests to, such as an Amazon SNS topic.
For more information on custom resources in CloudFormation, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
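
A skeletal Lambda handler for such a custom resource (the provisioning logic is left as a stub; the response fields and the PUT to the pre-signed ResponseURL follow the documented custom resource protocol):

    import json
    import urllib.request

    def handler(event, context):
        # Stub: create/update/delete the unsupported resource per event["RequestType"]
        status, data = "SUCCESS", {"Message": f"{event['RequestType']} handled"}

        body = json.dumps({
            "Status": status,
            "Reason": "See CloudWatch Logs for details",
            "PhysicalResourceId": "my-custom-resource-id",
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "Data": data,
        }).encode()

        # CloudFormation waits for this PUT to the pre-signed S3 URL
        req = urllib.request.Request(
            event["ResponseURL"], data=body, method="PUT",
            headers={"Content-Type": ""},
        )
        urllib.request.urlopen(req)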

NEW QUESTION 17
Your company uses AWS to host its resources. They have the following requirements:
1) Record all API calls and Transitions
2) Help in understanding what resources are there in the account
3) Facility to allow auditing credentials and logins
Which services would satisfy the above requirements?

  • A. AWS Config, CloudTrail, IAM Credential Reports
  • B. CloudTrail, IAM Credential Reports, AWS Config
  • C. CloudTrail, AWS Config, IAM Credential Reports
  • D. AWS Config, IAM Credential Reports, CloudTrail

Answer: C

Explanation:
You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This history includes calls made with the AWS Management Console, AWS Command Line Interface, AWS SDKs, and other AWS services. For more information on CloudTrail, please visit the below URL:
• http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. For more information on the Config service, please visit the below URL:
• https://aws.amazon.com/config/
You can generate and download a credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices. You can get a credential report from the AWS Management Console, the AWS SDKs and Command Line Tools, or the IAM API. For more information on Credential Reports, please visit the below URL:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
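
A short boto3 sketch of pulling the credential report (the columns printed are a subset of the real CSV):

    import boto3
    import csv
    import io
    import time

    iam = boto3.client("iam")
    # Report generation is asynchronous; poll until it is ready
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)

    content = iam.get_credential_report()["Content"].decode("utf-8")
    for row in csv.DictReader(io.StringIO(content)):
        print(row["user"], row["password_enabled"], row["mfa_active"])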

NEW QUESTION 18
You have an ELB setup in AWS with EC2 instances running behind it. You have been requested to monitor the incoming connections to the ELB. Which of the below options can suffice this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer

Answer: B

Explanation:
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.
Option A is invalid because this service will monitor all AWS services. Options C and D are invalid since ELB already provides a logging feature.
For more information on ELB access logs, please refer to the below document link from AWS: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
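
Enabling the logs in boto3 could look like this sketch (the load balancer and bucket names are hypothetical; the bucket needs a policy allowing ELB to write to it):

    import boto3

    elb = boto3.client("elb")  # Classic Load Balancer API
    elb.modify_load_balancer_attributes(
        LoadBalancerName="web-clb",
        LoadBalancerAttributes={
            "AccessLog": {
                "Enabled": True,
                "S3BucketName": "my-elb-access-logs",
                "S3BucketPrefix": "prod",
                "EmitInterval": 5,  # publish logs every 5 minutes
            }
        },
    )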

NEW QUESTION 19
You have an application consisting of a stateless web server tier running on Amazon EC2 instances behind a load balancer, and are using Amazon RDS with read replicas. Which of the following methods should you use to implement a self-healing and cost-effective architecture? Choose 2 answers from the options given below.

  • A. Set up a third-party monitoring solution on a cluster of Amazon EC2 instances in order to emit custom CloudWatch metrics to trigger the termination of unhealthy Amazon EC2 instances.
  • B. Set up scripts on each Amazon EC2 instance to frequently send ICMP pings to the load balancer in order to determine which instance is unhealthy and replace it.
  • C. Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon RDS DB CPU utilization CloudWatch metric to scale the instances.
  • D. Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon EC2 CPU utilization CloudWatch metric to scale the instances.
  • E. Use a larger Amazon EC2 instance type for the web server tier and a larger DB instance type for the data storage layer to ensure that they don't become unhealthy.
  • F. Set up an Auto Scaling group for the database tier along with an Auto Scaling policy that uses the Amazon RDS read replica lag CloudWatch metric to scale out the Amazon RDS read replicas.
  • G. Use an Amazon RDS Multi-AZ deployment.

Answer: DG

Explanation:
The scaling of EC2 instances in the Auto Scaling group is normally done with the metric of the CPU utilization of the current instances in the Auto Scaling group.
For more information on scaling in your Auto Scaling group, please refer to the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scaling-simple-step.html
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. For more information on RDS Multi-AZ, please refer to the below link: https://aws.amazon.com/rds/details/multi-az/
Option A is invalid because if you already have built-in metrics from CloudWatch, there is no need to spend more on a third-party monitoring solution.
Option B is invalid because health checks are already a feature of AWS ELB.
Option C is invalid because the database CPU usage should not be used to scale the web tier.
Option E is invalid because increasing the instance size does not always guarantee that the solution will not become unhealthy.
Option F is invalid because increasing read replicas will not suffice for write operations if the primary DB fails.
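
A sketch of option D's scaling policy in boto3 (the group and policy names are hypothetical; target tracking is shown here as one simple way to scale on EC2 CPU):

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # add/remove instances to hold ~50% average CPU
        },
    )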

NEW QUESTION 20
You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What's the best design for this system, using DynamoDB?

  • A. DynamoDB table with 100x higher read than write throughput, with CloudFront caching.
  • B. DynamoDB table with roughly equal read and write throughput, with CloudFront caching.
  • C. DynamoDB table with 100x higher read than write throughput, with ElastiCache caching.
  • D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.

Answer: D

Explanation:
Because the lOOx read ratio is mostly driven by a small subset, with caching, only a roughly equal number of reads to writes will miss the cache, since the supermajority will hit the top 1% scores. Knowing we need to set the values roughly equal when using caching, we select AWS OastiCache, because CloudFront
cannot directly cache DynamoDB queries, and OastiCache is an excellent in-memory cache for database queries, rather than a distributed proxy cache for content delivery.
For more information on DynamoDB table gudelines please refer to the below link:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
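
A cache-aside sketch of the chosen design (the endpoint, table, and key names are hypothetical; redis-py stands in for a client of the ElastiCache endpoint):

    import json
    import boto3
    import redis  # assumption: redis-py pointed at the ElastiCache endpoint

    cache = redis.Redis(host="scores.abc123.use1.cache.amazonaws.com", port=6379)
    table = boto3.resource("dynamodb").Table("GameScores")

    def get_score(player_id: str) -> dict:
        cached = cache.get(player_id)
        if cached is not None:
            return json.loads(cached)  # the hot top 1% of scores mostly hit here
        item = table.get_item(Key={"PlayerId": player_id}).get("Item", {})
        cache.setex(player_id, 60, json.dumps(item, default=str))  # short TTL
        return item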

NEW QUESTION 21
You need to scale an RDS deployment. You are operating at 10% writes and 90% reads, based on your logging. How best can you scale this in a simple way?

  • A. Create a second master RDS instance and peer the RDS groups.
  • B. Cache all the database responses on the read side with CloudFront.
  • C. Create read replicas for RDS since the load is mostly reads.
  • D. Create a Multi-AZ RDS install and route read traffic to the standby.

Answer: C

Explanation:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
Option A is invalid because you would need to maintain the synchronization yourself with a secondary instance.
Option B is invalid because you are introducing another layer unnecessarily when you already have read replicas. Option D is invalid because the Multi-AZ standby is used only for failover and cannot serve read traffic.
For more information on Read Replicas, please refer to the below link: https://aws.amazon.com/rds/details/read-replicas/
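
And if a replica ever needs to take over as a standalone database, promotion is a single call (the identifier is hypothetical):

    import boto3

    boto3.client("rds").promote_read_replica(
        DBInstanceIdentifier="mydb-replica-1"  # becomes a standalone, writable instance
    )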

NEW QUESTION 22
......

Thanks for reading the newest DOP-C01 exam dumps! We recommend you try the PREMIUM DumpSolutions.com DOP-C01 dumps in VCE and PDF here: https://www.dumpsolutions.com/DOP-C01-dumps/ (116 Q&As Dumps)