DOP-C01 | A Review Of Guaranteed DOP-C01 Vce

Our pass rate is as high as 98.9%, and the similarity between our DOP-C01 study guide and the real exam is 90%, based on our seven years of teaching experience. Do you want to achieve success in the Amazon-Web-Services DOP-C01 exam in just one try? Try the latest Amazon-Web-Services DOP-C01 exam practice questions and answers, and try the Amazon-Web-Services DOP-C01 brain dumps first.

We also have free DOP-C01 dump questions for you:

NEW QUESTION 1
When you add lifecycle hooks to an Auto Scaling Group, what are the wait states that occur during the scale-in and scale-out processes? Choose 2 answers from the options given below.

  • A. Launching:Wait
  • B. Exiting:Wait
  • C. Pending:Wait
  • D. Terminating:Wait

Answer: CD

Explanation:
The AWS Documentation mentions the following
After you add lifecycle hooks to your Auto Scaling group, they work as follows:
1. Auto Scaling responds to scale out events by launching instances and scale in events by terminating instances.
2. Auto Scaling puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance is paused until either you tell Auto Scaling to continue or the timeout period ends.
For more information on Auto Scaling lifecycle hooks, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
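As an illustration of how the Pending:Wait and Terminating:Wait states come into play, here is a minimal boto3 sketch that registers a termination lifecycle hook and later releases the paused instance; the group name, hook name, and instance ID are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Register a hook: terminating instances now enter Terminating:Wait first.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-connections",          # hypothetical hook name
    AutoScalingGroupName="my-asg",                  # hypothetical group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,                           # wait up to 5 minutes
    DefaultResult="CONTINUE",                       # proceed if the timeout expires
)

# Once your drain logic finishes, release the instance from the wait state.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="drain-connections",
    AutoScalingGroupName="my-asg",
    InstanceId="i-0123456789abcdef0",               # hypothetical instance ID
    LifecycleActionResult="CONTINUE",
)
```

Using autoscaling:EC2_INSTANCE_LAUNCHING as the transition instead would pause newly launched instances in Pending:Wait.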

NEW QUESTION 2
You have an application running in us-west-2 that requires 6 EC2 instances running at all times. With 3 AZs available in that region, which of the following deployments provides 100% fault tolerance if any single AZ in us-west-2 becomes unavailable? Choose 2 answers from the options below.

  • A. us-west-2a with 2 instances, us-west-2b with 2 instances, us-west-2c with 2 instances
  • B. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 0 instances
  • C. us-west-2a with 4 instances, us-west-2b with 2 instances, us-west-2c with 2 instances
  • D. us-west-2a with 6 instances, us-west-2b with 6 instances, us-west-2c with 0 instances
  • E. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 3 instances

Answer: DE

Explanation:
Since we need 6 instances running at all times even if any single AZ becomes unavailable, only options D and E fulfil this requirement. The AWS documentation mentions the following on Availability Zones:
When you launch an instance, you can select an Availability Zone or let us choose one for you. If you distribute your instances across multiple Availability Zones and one instance fails, you can design your application so that an instance in another Availability Zone can handle requests.
For more information on Regions and AZs please visit the URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
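The fault-tolerance arithmetic can be checked mechanically: for each layout, remove the largest AZ and see whether at least 6 instances remain. A small Python sketch of that check:

```python
# Instances per AZ for each answer option (us-west-2a, us-west-2b, us-west-2c).
deployments = {
    "A": [2, 2, 2],
    "B": [3, 3, 0],
    "C": [4, 2, 2],
    "D": [6, 6, 0],
    "E": [3, 3, 3],
}

REQUIRED = 6
for option, azs in deployments.items():
    # Worst case: the AZ holding the most instances becomes unavailable.
    remaining = sum(azs) - max(azs)
    verdict = "fault tolerant" if remaining >= REQUIRED else "insufficient"
    print(f"Option {option}: {remaining} instances remain -> {verdict}")
# Only options D and E keep 6 instances running after any single-AZ failure.
```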

NEW QUESTION 3
Which of the following is the default deployment mechanism used by Elastic Beanstalk when the application is created via the Console or EB CLI?

  • A. All at Once
  • B. Rolling Deployments
  • C. Rolling with additional batch
  • D. Immutable

Answer: B

Explanation:
The AWS documentation mentions
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure batch size and health check behavior during deployments. By default, your environment uses rolling deployments if you created it with the console or EB CLI, or all-at-once deployments if you created it with a different client (API, SDK, or AWS CLI).
For more information on Elastic Beanstalk deployments, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
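If you need a policy other than the default, the deployment policy is just an environment option in the aws:elasticbeanstalk:command namespace. A hedged boto3 sketch (the environment name is a placeholder):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Switch an existing environment's deployment policy (e.g. to Immutable).
eb.update_environment(
    EnvironmentName="my-env",  # hypothetical environment name
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Immutable",  # or "AllAtOnce", "Rolling", "RollingWithAdditionalBatch"
        }
    ],
)
```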

NEW QUESTION 4
An organization is planning to use AWS for their production rollout. The organization wants to implement automation for deployment, such that it will automatically create a LAMP stack, deploy an RDS MySQL DB instance, download the latest PHP installable from S3, and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?

  • A. AWS Elastic Beanstalk
  • B. AWS CloudFront
  • C. AWS CloudFormation
  • D. AWS DevOps

Answer: C

Explanation:
When you want to automate deployment, the natural choice is CloudFormation. Below is the excerpt from AWS on CloudFormation:
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software.
For more information on CloudFormation, please visit the link:
• https://aws.amazon.com/cloudformation/
As per AWS:
"AWS Elastic Beanstalk provides support for running Amazon Relational Database Service (Amazon RDS) instances in your Elastic Beanstalk environment. This works great for development and testing environments. However, it isn't ideal for a production environment because it ties the lifecycle of the database instance to the lifecycle of your application's environment."
• https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html
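To make the "orderly deployment" point concrete, here is a minimal boto3 sketch that launches a CloudFormation stack from a template stored in S3 and waits for completion; the stack name and template URL are hypothetical.

```python
import boto3

cfn = boto3.client("cloudformation")

# Kick off the stack; CloudFormation works out the provisioning order
# (RDS instance, EC2 LAMP tier, ELB) from the template's dependencies.
cfn.create_stack(
    StackName="lamp-stack",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/my-bucket/lamp-template.yaml",  # hypothetical
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
)

# Block until every resource reaches CREATE_COMPLETE.
cfn.get_waiter("stack_create_complete").wait(StackName="lamp-stack")
```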

NEW QUESTION 5
What are the benefits of implementing a Blue/Green deployment for your infrastructure or application-level changes? Choose 3 answers from the options given below.

  • A. Near zero-downtime release for new changes
  • B. Better rollback capabilities
  • C. Ability to deploy with higher risk
  • D. Good turnaround time for application deployments

Answer: ABD

Explanation:
The AWS Documentation mentions the following
Blue/green deployments provide near zero-downtime release and rollback capabilities. The fundamental idea behind blue/green deployment is to shift traffic between two identical environments that are running different versions of your application. The blue environment represents the current application version serving production traffic. In parallel, the green environment is staged running a different version of your application. After the green environment is ready and tested, production traffic is redirected from blue to green.
For more information on Blue Green deployments please see the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf

NEW QUESTION 6
Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to process this data and used RabbitMQ, an open-source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?

  • A. Use SQS for passing job messages. Use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
  • B. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
  • C. Change the storage class of the S3 objects to Reduced Redundancy Storage. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
  • D. Use SNS to pass job messages; use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.

Answer: B

Explanation:
The best option for reducing costs is Glacier, since in the on-premises environment everything was ultimately stored on tape; Reduced Redundancy Storage does not meet that archival requirement, so options A and C are out.
Next, SQS should be used for job messages, since RabbitMQ (a queue-based system) was used internally; SNS does not fit that pattern, so option D is out.
That leaves option B. The following diagram shows how SQS is used in a worker environment:
[Exhibit: diagram of SQS used in a worker environment]
For more information on SQS queues, please visit the below URL:
• http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-how-it-works.html
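A rough boto3 sketch of the chosen design: workers pull jobs from SQS, process them, and archive the finished objects. The queue name, message format, and process_image step are all hypothetical; note that direct copies into the GLACIER storage class are supported by the current S3 API, while an S3 lifecycle transition rule is the classic alternative.

```python
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

queue_url = sqs.get_queue_url(QueueName="image-jobs")["QueueUrl"]  # hypothetical queue

def process_image(bucket, key):
    """Placeholder for the actual aerial-image processing step."""
    pass

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        bucket, key = msg["Body"].split("/", 1)  # assumes "bucket/key" message bodies
        process_image(bucket, key)
        # Archive the processed object by rewriting it with the GLACIER storage class.
        s3.copy_object(
            Bucket=bucket, Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            StorageClass="GLACIER",
        )
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```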

NEW QUESTION 7
You need to deploy a multi-container Docker environment on to Elastic Beanstalk. Which of the following files can be used to deploy a set of Docker containers to Elastic Beanstalk?

  • A. Dockerfile
  • B. DockerMultifile
  • C. Dockerrun.aws.json
  • D. Dockerrun

Answer: C

Explanation:
The AWS Documentation specifies
A Dockerrun.aws.json file is an Elastic Beanstalk-specific JSON file that describes how to deploy a set of Docker containers as an Elastic Beanstalk application. You can use a Dockerrun.aws.json file for a multicontainer Docker environment.
Dockerrun.aws.json describes the containers to deploy to each container instance in the environment as well as the data volumes to create on the host instance for the containers to mount.
For more information on this, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
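For reference, a minimal (hypothetical) version 2 Dockerrun.aws.json describing a single web container in a multicontainer environment might look like this:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}
```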

NEW QUESTION 8
You've been tasked with improving the current deployment process by making it easier to deploy and reducing the time it takes. You have been tasked with creating a continuous integration (CI) pipeline that can build AMIs. Which of the below is the best way to get this done? Assume that at most your development team will be deploying builds 5 times a week.

  • A. Use a dedicated EC2 instance with an EBS Volume. Download and configure the code and then create an AMI out of that.
  • B. Use OpsWorks to launch an EBS-backed instance, then use a recipe to bootstrap the instance, and then have the CI system use the CreateImage API call to make an AMI from it.
  • C. Upload the code and dependencies to Amazon S3, launch an instance, download the package from Amazon S3, then create the AMI with the CreateSnapshot API call.
  • D. Have the CI system launch a new instance, then bootstrap the code and dependencies on that instance, and create an AMI using the CreateImage API call.

Answer: D

Explanation:
Since the number of builds is just a few per week, open-source CI systems such as Jenkins can be used.
Jenkins can be used as an extensible automation server; it can act as a simple CI server or be turned into the continuous delivery hub for any project.
For more information on the Jenkins CI tool please refer to the below link:
• https://jenkins.io/
Options A and C are partially correct, but since you have only 5 deployments per week, keeping dedicated instances that consume costs is not required. Option B is partially correct, but again, a separate system such as OpsWorks for such a low number of deployments is not required.

NEW QUESTION 9
You have instances running in your VPC. You have both production and development instances running in the VPC. You want to ensure that people who are responsible for the development instances don't have access to work on the production instances, to ensure better security. Using policies, which of the following would be the best way to accomplish this? Choose the correct answer from the options given below.

  • A. Launch the test and production instances in separate VPCs and use VPC peering
  • B. Create an IAM policy with a condition which allows access to only instances that are used for production or development
  • C. Launch the test and production instances in different Availability Zones and use Multi-Factor Authentication
  • D. Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags

Answer: D

Explanation:
You can easily add tags which define which instances are production and which are development, and then ensure these tags are used when controlling access via an IAM policy.
For more information on tagging your resources, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
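As a sketch of the tag-based approach, the following hypothetical IAM policy allows instance operations only on EC2 instances tagged Environment=Development, using the ec2:ResourceTag condition key (the account ID and region are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances"
      ],
      "Resource": "arn:aws:ec2:us-west-2:123456789012:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/Environment": "Development" }
      }
    }
  ]
}
```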

NEW QUESTION 10
You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 because of the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users?

  • A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances.
  • B. Sign into the AWS Management Console, and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate.
  • C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.
  • D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. Run a stack update with the new template.

Answer: D

Explanation:
The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on rolling updates, please visit the below link:
• https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
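A hedged CloudFormation fragment showing where the UpdatePolicy attribute sits on the Auto Scaling group resource; the logical names and batch settings are illustrative only:

```yaml
WebServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 4   # keep serving traffic during the update
      MaxBatchSize: 2            # replace two instances at a time
      PauseTime: PT5M
  Properties:
    LaunchConfigurationName: !Ref WebServerLaunchConfig   # updated to the C3 type
    MinSize: '4'
    MaxSize: '8'
    AvailabilityZones: !GetAZs ''
```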

NEW QUESTION 11
You are a DevOps Engineer for your company, in charge of an application that uses EC2, ELB, and Auto Scaling. You have been requested to get the ELB access logs, but when you try to access the logs, you can see that nothing has been recorded in S3. Why is this the case?

  • A. You don't have the necessary access to the logs generated by ELB.
  • B. By default, ELB access logs are disabled.
  • C. The Auto Scaling service is not sending the required logs to ELB
  • D. The EC2 instances are not sending the required logs to ELB

Answer: B

Explanation:
The AWS Documentation mentions
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify. You can disable access logging at any time.
For more information on ELB access logs please see the below link:
• http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
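Enabling the logs is a single attribute change on the load balancer. A minimal boto3 sketch for a Classic Load Balancer (the names are placeholders, and the target bucket must carry a policy permitting ELB log delivery):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",  # hypothetical
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs",  # hypothetical bucket
            "S3BucketPrefix": "prod",
            "EmitInterval": 5,              # publish every 5 minutes (default is 60)
        }
    },
)
```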

NEW QUESTION 12
You have a web application hosted on EC2 instances. There are application changes which happen to the web application on a quarterly basis. Which of the following are examples of Blue/Green deployments which can be applied to the application? Choose 2 answers from the options given below.

  • A. Deploy the application to an Elastic Beanstalk environment. Have a secondary Elastic Beanstalk environment in place with the updated application code. Use the swap URLs feature to switch onto the new environment.
  • B. Place the EC2 instances behind an ELB. Have a secondary environment with EC2 instances and an ELB in another region. Use Route53 with geolocation to route requests and switch over to the secondary environment.
  • C. Deploy the application using OpsWorks stacks. Have a secondary stack for the new application deployment. Use Route53 to switch over to the new stack for the new application update.
  • D. Deploy the application to an Elastic Beanstalk environment. Use the Rolling updates feature to perform a Blue Green deployment.

Answer: AC

Explanation:
The AWS Documentation mentions the following
AWS Elastic Beanstalk is a fast and simple way to get an application up and running on AWS. It's perfect for developers who want to deploy code without worrying about managing the underlying infrastructure. Elastic Beanstalk supports Auto Scaling and Elastic Load Balancing, both of which enable blue/green deployment.
Elastic Beanstalk makes it easy to run multiple versions of your application and provides capabilities to swap the environment URLs, facilitating blue/green deployment.
AWS OpsWorks is a configuration management service based on Chef that allows customers to deploy and manage application stacks on AWS. Customers can specify resource and application configuration, and deploy and monitor running resources. OpsWorks simplifies cloning entire stacks when you're preparing blue/green environments.
For more information on Blue Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
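The Elastic Beanstalk URL swap from option A maps to a single API call. A hedged boto3 sketch with hypothetical blue/green environment names:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Exchange the CNAMEs of the two environments, shifting production traffic
# from the blue environment to the already-tested green one.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",        # hypothetical
    DestinationEnvironmentName="my-app-green",  # hypothetical
)
```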

NEW QUESTION 13
You are designing an application that contains protected health information. Security and compliance requirements for your application mandate that all protected health information in the application use encryption at rest and in transit. The application uses a three-tier architecture where data flows through the load balancer and is stored on Amazon EBS volumes for processing and the results are stored in Amazon S3 using the AWS SDK.
Which of the following two options satisfy the security requirements? (Select two)

  • A. Use SSL termination on the load balancer, Amazon EBS encryption on Amazon EC2 instances, and Amazon S3 with server-side encryption.
  • B. Use SSL termination with a SAN SSL certificate on the load balancer, Amazon EC2 with all Amazon EBS volumes using Amazon EBS encryption, and Amazon S3 with server-side encryption with customer-managed keys.
  • C. Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes, and Amazon S3 with server-side encryption.
  • D. Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, and Amazon S3 with server-side encryption.
  • E. Use SSL termination on the load balancer, an SSL listener on the Amazon EC2 instances, Amazon EBS encryption on EBS volumes containing PHI, and Amazon S3 with server-side encryption.

Answer: CE

Explanation:
The AWS Documentation mentions the following:
HTTPS/SSL Listeners
You can create a load balancer with the following security features.
SSL Server Certificates
If you use HTTPS or SSL for your front-end connections, you must deploy an X.509 certificate (SSL server certificate) on your load balancer. The load balancer decrypts
requests from clients before sending them to the back-end instances (known as SSL termination). For more information, see SSL/TLS Certificates for Classic Load Balancers.
If you don't want the load balancer to handle the SSL termination (known as SSL offloading), you can use TCP for both the front-end and back-end connections, and deploy certificates on the registered instances handling requests.
Reference Link:
• http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
Create a Classic Load Balancer with an HTTPS Listener
A load balancer takes requests from clients and distributes them across the EC2 instances that are registered with the load balancer.
You can create a load balancer that listens on both the HTTP (80) and HTTPS (443) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load balancer terminates the requests and communication from the load balancer to the instances is not encrypted. If the HTTPS listener sends requests to the instances on port 443, communication from the load balancer to the instances is encrypted.
Reference Link:
• http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
Options A and B are incorrect because they are missing encryption in transit between the ELB and the EC2 instances.
Option D is incorrect because it is missing encryption at rest on the data associated with the EC2 instances.
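To illustrate end-to-end encryption in transit on a Classic Load Balancer, here is a hedged boto3 sketch of an HTTPS listener that re-encrypts traffic to the back-end instances on port 443; the names, zones, and certificate ARN are placeholders:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.create_load_balancer(
    LoadBalancerName="phi-lb",  # hypothetical
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTPS",  # back-end connection stays encrypted
        "InstancePort": 443,
        "SSLCertificateId": "arn:aws:acm:us-west-2:123456789012:certificate/example",  # placeholder ARN
    }],
    AvailabilityZones=["us-west-2a", "us-west-2b"],
)
```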

NEW QUESTION 14
You have the following application to be set up in AWS:
1) A web tier hosted on EC2 Instances
2) Session data to be written to DynamoDB
3) Log files to be written to Microsoft SQL Server
How can you allow an application to write data to a DynamoDB table?

  • A. Add an IAM user to a running EC2 instance.
  • B. Add an IAM user that allows write access to the DynamoDB table.
  • C. Create an IAM role that allows read access to the DynamoDB table.
  • D. Create an IAM role that allows write access to the DynamoDB table.

Answer: D

Explanation:
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use or to create and distribute your AWS credentials to the instances.
For more information on IAM roles please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
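With a role attached to the instance, application code never embeds keys; boto3 picks up the role's temporary credentials automatically. A minimal sketch with a hypothetical table:

```python
import boto3

# No access keys in code: on EC2, boto3 automatically obtains temporary
# credentials from the instance's IAM role via the instance metadata service.
dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("Sessions")  # hypothetical table name

sessions.put_item(Item={
    "SessionId": "abc123",  # hypothetical partition key
    "UserId": "user-42",
    "LastSeen": "2017-01-01T00:00:00Z",
})
```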

NEW QUESTION 15
You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can implement to mitigate these concerns? Choose two answers from the options given below.

  • A. Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses, and enable bucket versioning.
  • B. Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects, and enable bucket versioning.
  • C. Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code.
  • D. Create an AWS Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application's Amazon EC2 instances with this role.
  • E. Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis.
  • F. Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.

Answer: BD

Explanation:
You can add another layer of protection by enabling MFA Delete on a versioned bucket. Once you do so, you must provide your AWS account's access keys and a valid code from the account's MFA device in order to permanently delete an object version or suspend or reactivate versioning on the bucket.
For more information on MFA please refer to the below link: https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on roles for EC2 please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Option A is invalid because this will not address either the integrity or security concern completely. Option C is invalid because user credentials should never be used on EC2 instances to access AWS resources.
Options E and F are invalid because AWS Data Pipeline is an unnecessary overhead when you already have built-in controls to manage security for S3.
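For the versioning/MFA Delete half of the answer, a hedged boto3 sketch; note that MFA Delete can only be configured by the root account with its MFA device, and the bucket name, device serial, and code below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning together with MFA Delete. The MFA argument is the
# device serial (or ARN) followed by the current six-digit code.
s3.put_bucket_versioning(
    Bucket="code-repo-bucket",  # hypothetical
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```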

NEW QUESTION 16
You are trying to debug the creation of CloudFormation stack resources. Which of the following can be used to help in the debugging process?
Choose 2 answers from the options below

  • A. Use CloudTrail to debug all the API calls sent by the CloudFormation stack.
  • B. Use the AWS CloudFormation console to view the status of your stack.
  • C. See the logs in the /var/log directory for Linux instances.
  • D. Use AWS Config to debug all the API calls sent by the CloudFormation stack.

Answer: BC

Explanation:
The AWS Documentation mentions
Use the AWS CloudFormation console to view the status of your stack. In the console, you can view a list of stack events while your stack is being created, updated, or deleted. From this list, find the failure event and then view the status reason for that event.
For Amazon EC2 issues, view the cloud-init and cfn logs. These logs are published on the Amazon EC2 instance in the /var/log/ directory. These logs capture processes and command outputs while AWS CloudFormation is setting up your instance. For Windows, view the EC2Config service and cfn logs in %ProgramFiles%\Amazon\EC2ConfigService and C:\cfn\log.
For more information on CloudFormation troubleshooting, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html
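The console view of stack events has a direct API equivalent. A small boto3 sketch that surfaces only the failure events for a (hypothetical) stack:

```python
import boto3

cfn = boto3.client("cloudformation")

events = cfn.describe_stack_events(StackName="my-stack")["StackEvents"]  # hypothetical stack
for event in events:
    # Failure events carry the reason the resource could not be created.
    if "FAILED" in event["ResourceStatus"]:
        print(event["LogicalResourceId"],
              event["ResourceStatus"],
              event.get("ResourceStatusReason", "no reason given"))
```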

NEW QUESTION 17
You are planning on configuring logs for your Elastic Load Balancer. At what intervals do the logs get produced by the Elastic Load Balancer service? Choose 2 answers from the options given below.

  • A. 5 minutes
  • B. 60 minutes
  • C. 1 minute
  • D. 30 seconds

Answer: AB

Explanation:
The AWS Documentation mentions
Elastic Load Balancing publishes a log file for each load balancer node at the interval you specify. You can specify a publishing interval of either 5 minutes or 60 minutes when you enable the access log for your load balancer. By default, Elastic Load Balancing publishes logs at a 60-minute interval.
For more information on Elastic load balancer logs please see the below link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html

NEW QUESTION 18
You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

  • A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
  • B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad hoc MapReduce analysis and write new queries when needed.
  • C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
  • D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation:
Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly. For more information on Elasticsearch Service, please refer to the below link:
• https://aws.amazon.com/elasticsearch-service/

NEW QUESTION 19
Your company is planning on using the available services in AWS to completely automate their integration, build, and deployment process. They are planning on using AWS CodeBuild to build their artifacts. When using CodeBuild, which of the following files specifies a collection of build commands that can be used by the service during the build process?

  • A. appspec.yml
  • B. buildspec.yml
  • C. buildspec.xml
  • D. appspec.json

Answer: B

Explanation:
The AWS documentation mentions the following
AWS CodeBuild currently supports building from the following source code repository providers. The source code must contain a build specification (build spec) file, or the build spec must be declared as part of a build project definition. A buildspec is a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build.
For more information on AWS CodeBuild, please refer to the below link: http://docs.aws.amazon.com/codebuild/latest/userguide/planning.html
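A minimal illustrative buildspec.yml (the phases and artifact paths are hypothetical; version 0.2 is the commonly documented schema):

```yaml
version: 0.2
phases:
  install:
    commands:
      - pip install -r requirements.txt
  build:
    commands:
      - python -m pytest          # run the test suite
      - python setup.py sdist     # build the artifact
artifacts:
  files:
    - dist/*
```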

NEW QUESTION 20
Your security officer has told you that you need to tighten up the logging of all events that occur on your AWS account. He wants to be able to access all events that occur on the account across all regions quickly and in the simplest way possible. He also wants to make sure he is the only person that has access to these events in the most secure way possible. Which of the following would be the best solution to assure his requirements are met? Choose the correct answer from the options below

  • A. Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer with a bucket policy that restricts access to his user only, and also add MFA to the policy for a further level of security.
  • B. Use CloudTrail to log all events to an Amazon Glacier Vault. Make sure the vault access policy only grants access to the security officer's IP address.
  • C. Use CloudTrail to send all API calls to CloudWatch and send an email to the security officer every time an API call is made. Make sure the emails are encrypted.
  • D. Use CloudTrail to log all events to a separate S3 bucket in each region, as CloudTrail cannot write to a bucket in a different region. Use MFA and bucket policies on all the different buckets.

Answer: A

Explanation:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log,
continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.
You can configure CloudTrail to send all logs to a central S3 bucket. For more information on CloudTrail, please visit the below URL:
• https://aws.amazon.com/cloudtrail/
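The single-bucket, all-regions design from option A maps to one multi-region trail. A hedged boto3 sketch with hypothetical names (the bucket must already carry a policy allowing CloudTrail delivery):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail that records API events from every region into a central bucket.
cloudtrail.create_trail(
    Name="org-audit-trail",             # hypothetical
    S3BucketName="central-audit-logs",  # hypothetical
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```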

NEW QUESTION 21
You have a number of CloudFormation stacks in your IT organization. Which of the following commands will help you see all the CloudFormation stacks that have a completed status?

  • A. describe-stacks
  • B. list-stacks
  • C. stacks-complete
  • D. list-templates

Answer: B

Explanation:
The following is the description of the list-stacks command
Returns the summary information for stacks whose status matches the specified StackStatusFilter.
Summary information for stacks that have been deleted is kept for 90 days after the stack is deleted. If no stack-status-filter is specified, summary information for all stacks is returned (including existing stacks and stacks that have been deleted).
For more information on the list-stacks command please visit the below link: http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stacks.html
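The boto3 equivalent of `aws cloudformation list-stacks --stack-status-filter ...` is sketched below, filtered to completed stacks; pagination matters because summaries for deleted stacks are retained for 90 days:

```python
import boto3

cfn = boto3.client("cloudformation")

# List only stacks whose status indicates a completed create or update.
paginator = cfn.get_paginator("list_stacks")
for page in paginator.paginate(StackStatusFilter=["CREATE_COMPLETE", "UPDATE_COMPLETE"]):
    for summary in page["StackSummaries"]:
        print(summary["StackName"], summary["StackStatus"])
```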

NEW QUESTION 22
......

Recommended! Get the full DOP-C01 dumps in VCE and PDF from 2passeasy. Welcome to download: https://www.2passeasy.com/dumps/DOP-C01/ (New 116 Q&As Version)