Google ACE sample questions
Q1- You work in a company that processes large amounts of time-stamped IoT data, which can be petabytes in size. You need to write and update data at high speed. Which Google Cloud product should you use?
- Cloud Datastore
- Cloud Storage
- Cloud Bigtable
- BigQuery
Ans: Cloud Bigtable
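For reference, a minimal sketch of creating a Bigtable instance from the gcloud CLI; the instance ID, cluster ID, zone, and node count are placeholders, not part of the question:
  # Create a Bigtable instance with a single three-node cluster
  gcloud bigtable instances create iot-timeseries \
      --display-name="IoT time series" \
      --cluster-config=id=iot-cluster,zone=europe-central2-a,nodes=3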
Q2- …is the root node of the Google Cloud resource hierarchy and all resources that belong to an organization are grouped under this node.
- Organization…
- Project…
- Folder…
Ans: Organization
Explanation: The Organization resource is the root node of the Google Cloud resource hierarchy, and all resources that belong to an organization are grouped under the organization node. This provides central visibility and control over every resource that belongs to an organization.
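To explore the hierarchy from the gcloud CLI, a minimal sketch (the organization ID is a placeholder):
  # List the Organization node(s) visible to your account
  gcloud organizations list
  # List folders directly under an organization
  gcloud resource-manager folders list --organization=123456789012
  # List projects
  gcloud projects list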
Q3- Your company has all of its Compute Engine resources in the europe-central2 region. You want to set europe-central2 as the default region for the gcloud command-line tool. Which command should you use?
- > gcloud config set compute/region europe-central2
- > gcloud config set compute/zone europe-central2
- > gcloud config set project europe-central2
- > gcloud config set region europe-central2
Ans: 1 - gcloud config set compute/region europe-central2
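A minimal sketch of setting and verifying the defaults; the zone name is an assumption added for illustration:
  # Set the default region and zone for the gcloud CLI
  gcloud config set compute/region europe-central2
  gcloud config set compute/zone europe-central2-a
  # Verify the active configuration
  gcloud config list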
Q4- Select all true statements about persistent disks in GCP. (select 2)
- Persistent disks are automatically encrypted to protect your data, in transit or at rest.
- Persistent disks are independent of the virtual machine instances, so you can detach or move your disks to keep your data even after you delete your instances.
- You can’t add more persistent disks to an instance to meet your performance and storage space requirements.
- You can’t resize your existing persistent disks to meet your performance and storage space requirements.
Ans: 1 & 2
Explanation: Persistent disks are durable network storage devices that your instances can access like physical disks in a desktop or a server. The data on each persistent disk is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance. Persistent disks are located independently from your virtual machine (VM) instances, so you can detach or move persistent disks to keep your data even after you delete your instances. Persistent disk performance scales automatically with size, so you can resize your existing persistent disks or add more persistent disks to an instance to meet your performance and storage space requirements.
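A minimal sketch of adding, resizing, and detaching a persistent disk; the disk, instance, and zone names are placeholders:
  # Create and attach an additional persistent disk
  gcloud compute disks create extra-data --size=200GB --zone=europe-central2-a
  gcloud compute instances attach-disk my-vm --disk=extra-data --zone=europe-central2-a
  # Grow the disk later; performance scales with size
  gcloud compute disks resize extra-data --size=500GB --zone=europe-central2-a
  # Detach it without losing the data
  gcloud compute instances detach-disk my-vm --disk=extra-data --zone=europe-central2-a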
Q5- A regular batch job transfers customer data from a CRM system to a BigQuery dataset and uses several virtual machines. You can tolerate some virtual machines going down. What should you do to reduce the costs of this job?
- You should only use e2-micro instances.
- You should use a fleet of e2-micro instances behind a managed instance group with autoscaling enabled.
- You should use preemptible Compute Engine instances.
- You should only use e2-standard-32 instances.
Ans: 3 - You should use preemptible Compute Engine instances.
Explanation: Preemptible VM instances are available at a much lower price – a 60-91% discount – compared to the price of standard VMs. However, Compute Engine might stop (preempt) these instances if it needs to reclaim the compute capacity for allocation to other VMs. Preemptible instances use excess Compute Engine capacity, so their availability varies with usage.
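A minimal sketch of creating one preemptible worker for the batch job; the instance name, machine type, and zone are placeholders. Newer projects can use Spot VMs (--provisioning-model=SPOT) for the same purpose:
  # Create a preemptible VM (may be stopped by Compute Engine at any time)
  gcloud compute instances create batch-worker-1 \
      --machine-type=e2-standard-4 \
      --zone=europe-central2-a \
      --preemptible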
Q6- The first grouping mechanism of the Google Cloud resource hierarchy is represented by…
- projects.
- organizations.
- resources.
- folders.
Ans: 1. projects.
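A minimal sketch of creating a project inside the hierarchy; the project, folder, and organization IDs are placeholders:
  # Create a project under a folder
  gcloud projects create my-sample-project-1234 --folder=987654321098
  # Or directly under the organization node
  gcloud projects create my-sample-project-5678 --organization=123456789012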
Q7- Select all true statements about virtual machines in GCP. (select 2)
- Preemptible virtual machines might be stopped by Compute Engine if it needs to reclaim the compute capacity for allocation to other virtual machines.
- Compute Engine doesn’t support auto-scaling.
- You should use large virtual machines for in-memory databases and CPU-intensive analytics, and multiple virtual machines for fault tolerance and flexibility.
- You cannot create virtual machines from Cloud Shell.
Ans: 1 and 3
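As a sketch of statements 1 and 3 in practice, multiple identical VMs behind a managed instance group with autoscaling give fault tolerance and flexibility; the template, group, zone, and thresholds below are placeholders:
  # Instance template for the workers
  gcloud compute instance-templates create worker-template --machine-type=e2-standard-4
  # Managed instance group based on the template
  gcloud compute instance-groups managed create worker-mig \
      --template=worker-template --size=2 --zone=europe-central2-a
  # Autoscale between 2 and 10 replicas on CPU utilization
  gcloud compute instance-groups managed set-autoscaling worker-mig \
      --zone=europe-central2-a --min-num-replicas=2 --max-num-replicas=10 \
      --target-cpu-utilization=0.6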
Q8 -You need to create a Kubernetes Engine cluster to deploy multiple pods and use BigQuery to store all container logs for later analysis. What solution should you apply to follow Google’s best practices?
- Enable Cloud Monitoring when creating a Kubernetes Engine cluster.
- Enable Cloud Logging when creating a Kubernetes Engine cluster.
- You should use the Cloud Logging export feature to create a sink to Cloud Storage, then create a Cloud Dataflow job that imports log files from Cloud Storage to BigQuery.
- The only solution is to develop a custom add-on that uses the Cloud Logging API and BigQuery API.
Ans: 2
Explanation: Cloud Logging is a fully managed service that allows you to store, search, analyze, monitor, and alert on logging data and events from Google Cloud and Amazon Web Services. You can collect logging data from over 150 common application components, on-premises systems, and hybrid cloud systems. Logging includes storage for logs through log buckets, a user interface called the Logs Explorer, and an API to manage logs programmatically. Logging lets you read and write log entries, query your logs, and control how you route and use your logs.
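A minimal sketch of the flow; the cluster, project, and dataset names are placeholders, and the sink's writer identity still needs write access to the BigQuery dataset:
  # Create a GKE cluster with Cloud Logging enabled for system and workload logs
  gcloud container clusters create log-demo-cluster \
      --zone=europe-central2-a --logging=SYSTEM,WORKLOAD
  # Route container logs to a BigQuery dataset with a log sink
  gcloud logging sinks create gke-to-bq \
      bigquery.googleapis.com/projects/my-project/datasets/container_logs \
      --log-filter='resource.type="k8s_container"'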
Q9- A mission-critical application is migrated to Google Kubernetes Engine from your on-premises data center and uses e2-standard-4 machine types. How can you deploy additional pods on e2-standard-32 machine types without causing application downtime?
- You should create a new cluster with node pool instances with e2-standard-32 machine types. Then deploy the application on the new cluster and remove the older one.
- You should create a new cluster with two node pools – one with e2-standard-4 machine types and the other with e2-standard-32 machine types. Then deploy the application on this new cluster and remove the older one.
- You should update the existing cluster to add a new node pool with e2-standard-32 machine types and deploy the pods there.
Ans: 3 - You should update the existing cluster to add a new node pool with e2-standard-32 machine types and deploy the pods there.
Explanation: A node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification. When you create a cluster, the number of nodes and type of nodes that you specify are used to create the first node pool of the cluster. Then, you can add additional node pools of different sizes and types to your cluster. All nodes in any given node pool are identical to one another. https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools
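A minimal sketch of the node-pool approach; the cluster, pool, zone, and node count are placeholders:
  # Add a node pool with larger machines to the existing cluster
  gcloud container node-pools create high-mem-pool \
      --cluster=prod-cluster --zone=europe-central2-a \
      --machine-type=e2-standard-32 --num-nodes=3
  # Then schedule the additional pods onto the new pool, e.g. with a nodeSelector:
  #   nodeSelector:
  #     cloud.google.com/gke-nodepool: high-mem-pool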
Q10- You want to assign GCP accounts to new employees. Is it a good practice for new GCP users to start working with GCP using a Gmail account?
- No. For example, if someone leaves your organization, there is no centralized way to remove their access to your cloud resources immediately.
- Yes, it’s a good practice.
- It isn’t possible to say clearly, it depends on the specific case.
Ans: 1
Explanation: https://cloud.google.com/iam/docs/how-to
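A minimal sketch of the managed-identity approach: grant roles to a Cloud Identity or Google Workspace group so access is revoked centrally when someone leaves the organization; the project, group, and role are placeholders:
  gcloud projects add-iam-policy-binding my-project \
      --member=group:data-engineers@example.com \
      --role=roles/bigquery.dataViewer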
Q11- Select all true statements about namespaces in Kubernetes. (select 2)
- It’s a best practice to create namespaces with the prefix kube-
- Namespaces provide a mechanism for isolating groups of resources within a single cluster.
- Namespaces increase security.
- Namespaces let you implement resource quotas across your cluster.
Ans: 2 and 4
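A minimal sketch of statements 2 and 4 in practice; the namespace name and quota values are placeholders:
  # Isolate a team's workloads in their own namespace
  kubectl create namespace team-a
  # Cap the resources that namespace may request
  kubectl create quota team-a-quota --namespace=team-a \
      --hard=requests.cpu=4,requests.memory=8Gi,pods=20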
Q12- Select all true statements about Virtual Private Cloud (VPC). (select 2)
- A Virtual Private Cloud (VPC) network is a virtual version of a physical network, implemented inside of Google’s production network.
- VPC networks, including their associated routes and firewall rules, are global resources.
- VPC networks cannot be connected to other VPC networks in different projects.
- A project can only contain one VPC network.
Ans: 1 and 2
Explanation: A Virtual Private Cloud (VPC) network is a virtual version of a physical network, implemented inside of Google’s production network. Projects can contain multiple VPC networks. Unless you create an organizational policy that prohibits it, new projects start with a default network (an auto mode VPC network) that has one subnetwork (subnet) in each region. https://cloud.google.com/vpc/docs/vpc
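A minimal sketch of creating a custom-mode VPC; the network, subnet, region, IP range, and firewall rule names are placeholders:
  gcloud compute networks create demo-vpc --subnet-mode=custom
  gcloud compute networks subnets create demo-subnet \
      --network=demo-vpc --region=europe-central2 --range=10.10.0.0/24
  # Firewall rules are attached to the network (a global resource)
  gcloud compute firewall-rules create demo-allow-ssh \
      --network=demo-vpc --allow=tcp:22 --source-ranges=0.0.0.0/0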
Q13- As a Cloud Engineer, you want to automatically back up your Compute Engine workloads. What should you do?
- You should create a snapshot manually.
- You should create a snapshot schedule to regularly and automatically back up your data.
- You should create a machine image schedule to regularly and automatically back up your data.
Ans: 2
Explanation: Snapshots incrementally back up data from your persistent disks. After you create a snapshot to capture the current state of the disk, you can use it to restore that data to a new disk. Compute Engine stores multiple copies of each snapshot across multiple locations with automatic checksums to ensure the integrity of your data. You can create snapshots from disks even while they are attached to running virtual machine (VM) instances. The lifecycle of a snapshot created from a disk attached to a running VM instance is independent of the lifecycle of the VM instance.
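A minimal sketch of a snapshot schedule; the policy name, disk name, region, zone, retention, and start time are placeholders:
  # Define a daily snapshot schedule
  gcloud compute resource-policies create snapshot-schedule daily-backup \
      --region=europe-central2 --max-retention-days=14 \
      --daily-schedule --start-time=04:00
  # Attach the schedule to a disk so backups run automatically
  gcloud compute disks add-resource-policies my-disk \
      --resource-policies=daily-backup --zone=europe-central2-a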
Q14- Select all options where you should use custom roles. (select 2)
- You don’t like the name of the predefined role and want to create your own custom role.
- A principal needs a permission, but each predefined role that includes that permission also includes permissions that the principal doesn’t need and shouldn’t have.
- Recommender suggests that you should remove or replace a role that gives your principals excess permissions.
Ans: 2 and 3
Explanation: IAM lets you create custom IAM roles. Custom roles help you enforce the principle of least privilege, because they help to ensure that the principals in your organization have only the permissions that they need. Consider creating a custom role in the following situations: a principal needs a permission, but each predefined role that includes that permission also includes permissions that the principal does not need and should not have; or you use role recommendations to replace overly permissive role grants with more appropriate role grants. In some cases, you might receive a recommendation to create a custom role.
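A minimal sketch of creating and granting a custom role; the role ID, project, permissions, and user are placeholders:
  # Create a custom role containing only the needed permissions
  gcloud iam roles create customBucketReader --project=my-project \
      --title="Custom Bucket Reader" \
      --permissions=storage.buckets.get,storage.objects.get,storage.objects.list \
      --stage=GA
  # Grant it like any other role
  gcloud projects add-iam-policy-binding my-project \
      --member=user:analyst@example.com \
      --role=projects/my-project/roles/customBucketReader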
Q15- Which storage service should you use for semi-structured application key-value data?
- Cloud Spanner
- Cloud Datastore
- Cloud SQL
- Cloud Storage
- BigQuery
- Cloud Bigtable
Ans: Cloud Datastore