How computing has evolved, and why you need a multi-cloud strategy
Information technology has been moving fast for several years, bringing more powerful and agile computation in the cloud, richer software, better analytics, mobility, and sensors. If only most enterprise technology vendors were keeping up. The incumbents were schooled in the old world of proprietary systems, higher switching costs, and vendor lock-in, and it shows in how they see the world.
There is no better example of this than the trend toward hybrid and multi-cloud computing. In both cases, cloud-era technologies give customers the ability to make better use of existing assets and to take advantage of newer ways to compute, store, and analyze data. This is not theory but reality: according to Gartner, 81% of organizations are working with two or more public cloud providers. A multi-cloud strategy gives companies the freedom to use the best possible cloud for each workload.
In contrast, single-cloud stacks impose a significant cost. Where there could be greater power drawn from the unique capabilities of every cloud, there is higher complexity and the limitations of proprietary systems. Where there could be more insight, there is siloed data. Where there could be the resilience of entirely different systems, there is concentrated risk. Where there could be more innovation and efficiency, there are impediments. Where there could be a single view of assets, there is a lack of control, haphazard security, and opaque costs.
At Google Cloud, we’re committed to meeting the needs of customers by providing choice, flexibility, and openness. This commitment is reflected in our contributions to projects like Kubernetes, TensorFlow, and many more.
Google Cloud is the birthplace and home of the Kubernetes project. Created by the same engineers who built Kubernetes, Google Kubernetes Engine (GKE) is an easy-to-use, cloud-based Kubernetes service for running containerized applications—everywhere, not just on GCP. Anthos builds on the firm foundation of GKE, so you can build out hybrid and multi-cloud deployments with better cloud software production, release, and management—the way you want, not the way a vendor dictates. That is key to how a healthy cloud ecosystem works.
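The portability described above comes from the fact that standard Kubernetes manifests run unchanged on any conformant cluster. As a minimal sketch (the container image is Google's public hello-app sample; the names are placeholders), the same Deployment applies equally to GKE on GCP, an Anthos cluster on another cloud, or Anthos on bare metal:

```yaml
# A plain Kubernetes Deployment: nothing here is GKE-specific,
# so the manifest works on any conformant cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name
spec:
  replicas: 3                # three identical pods for resilience
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
```

Applying it is the same `kubectl apply -f deployment.yaml` against whichever cluster your kubeconfig points at; Anthos layers fleet-wide configuration and management on top rather than replacing this standard API.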
The flexibility to run applications where you need them, without added complexity, has been a key factor for customers choosing Anthos: many want to continue using their existing investments both on-premises and in other clouds, and a common management layer helps their teams deliver quality services with low overhead.
Today, just two years after launch, Anthos supports more kinds of workloads, in more kinds of environments, in many more locations. According to Forrester, Anthos brings a 40% to 55% improvement in platform operating efficiency. Taking multi-cloud even further, we recently announced Anthos on bare metal, so customers can have high-performance computing with minimal latency even in remote locations. And Apigee, the leading API management platform, works on any cloud or on-premises, just as it should.
Anthos is but one part of our commitment to maximize customer power, choice, and control wherever possible. In July we announced BigQuery Omni, a multi-cloud version of our popular analytics service. For the first time, an enterprise can seamlessly connect to its data across Google Cloud, Amazon Web Services (AWS), and (soon) Microsoft Azure, running large-scale data analyses quickly from a single user interface, without having to move or copy data sets.
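To illustrate the model (the dataset, connection, and bucket names below are hypothetical), BigQuery Omni keeps standard BigQuery SQL: you define an external table over data that lives in another cloud via a connection, then query it in place:

```sql
-- Define a table over Parquet files sitting in an AWS S3 bucket,
-- via a BigQuery connection to that AWS region (names are placeholders).
CREATE EXTERNAL TABLE aws_dataset.sales
WITH CONNECTION `aws-us-east-1.my_aws_connection`
OPTIONS (
  format = 'PARQUET',
  uris = ['s3://my-company-bucket/sales/*']
);

-- Analyze the data where it lives; no copy into Google Cloud required.
SELECT region, SUM(revenue) AS total_revenue
FROM aws_dataset.sales
GROUP BY region;
```

The point of the design is that the query surface stays the same regardless of which cloud holds the bytes; only the connection changes.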
Earlier this year, Google Cloud announced the acquisition of Looker, a multi-cloud data analysis platform that supports multiple data sources and deployment methods. Naturally, Looker as part of Google Cloud still supports hosting on public clouds like AWS, and connects with data sources like Redshift, Snowflake, BigQuery, and more than 50 other supported SQL dialects, so you can link to multiple databases, avoid database lock-in, and maintain multi-cloud data environments.
From open source to multi-cloud to what might be called “analytics anywhere,” our strategy is not based on our predetermined need, or some sense of “how it’s always been” in enterprise computing, but rather on Google’s experience and vision of how computing has evolved, and where it’s likely headed.
Computing wants to be everywhere, you might say, with the right machine crunching the right data for the right purpose. Done right, that’s the future: Enabling businesses to innovate and compete wherever they want, using the data they own to best serve their customers with better products and services.
We’re confident that history is on the side of open-source-based, multi-cloud APIs. Years ago, open source was condemned, and sometimes forked, to preserve a provider’s power over customers. Eventually it was allowed, and today it’s welcomed. Now it’s multi-cloud’s turn to move from rejection to acceptance and, eventually, ubiquity.
There’s a good chance that soon your cloud will do even more of what it should have done in the first place. Watch this space.