Monthly: November 2020

Improvements to PDFs imported to Google Docs

Quick launch summary 

We’re making a range of updates that improve how PDFs are converted into Google Docs. Specifically, you may notice improvements in: 
  • Image imports, including the image itself and text wrapping related to images. 
  • Text styles and formatting, such as importing underline and strikethrough, background color, and more fonts. 
  • Layout conversion, including support for multi-column layouts, custom page sizes, tables with borders, and improved content ordering.
Importing PDFs into Google Docs now supports more formatting options.

Getting started 

Availability 

  • Available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus, as well as G Suite Basic, Business, Education, Enterprise for Education, and Nonprofits customers 



TensorFlow Recommenders: Scalable retrieval and feature interaction modelling

Posted by Ruoxi Wang, Phil Sun, Rakesh Shivanna and Maciej Kula (Google)

In September, we open-sourced TensorFlow Recommenders, a library that makes building state-of-the-art recommender system models easy. Today, we’re excited to announce a new release of TensorFlow Recommenders (TFRS), v0.3.0.

The new version brings two important features, both critical to building and deploying high-quality, scalable recommender models.

The first is built-in support for fast, scalable approximate retrieval. By leveraging ScaNN, TFRS now makes it possible to build deep learning recommender models that can retrieve the best candidates out of millions in milliseconds – all while retaining the simplicity of deploying a single “query features in, recommendations out” SavedModel object.

The second is support for better techniques for modelling feature interactions. The new release of TFRS includes an implementation of Deep & Cross Network: efficient architectures for learning interactions between all the different features used in a deep learning recommender model.

If you’re eager to try out the new features, you can jump straight into our efficient retrieval and feature interaction modelling tutorials. Otherwise, read on to learn more!

Efficient retrieval

The goal of many recommender systems is to retrieve a handful of good recommendations out of a pool of millions or tens of millions of candidates. The retrieval stage of a recommender system tackles the “needle in a haystack” problem of finding a short list of promising candidates out of the entire candidate list.

As discussed in our previous blog post, TensorFlow Recommenders makes it easy to build two-tower retrieval models. Such models perform retrieval in two steps:

  1. Mapping user input to an embedding
  2. Finding the top candidates in embedding space

The cost of the first step is largely determined by the complexity of the query tower model. For example, if the user input is text, a query tower that uses an 8-layer transformer will be roughly twice as expensive to compute as one that uses a 4-layer transformer. Techniques such as sparsity, quantization, and architecture optimization all help with reducing this cost.

However, for large databases with millions of candidates, the second step is generally even more important for fast inference. Our two-tower model uses the dot product of the user input and candidate embedding to compute candidate relevancy, and although computing dot products is relatively cheap, computing one for every embedding in a database, which scales linearly with database size, quickly becomes computationally infeasible. A fast nearest neighbor search (NNS) algorithm is therefore crucial for recommender system performance.
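To make the scaling problem concrete, here is a minimal NumPy sketch (not the TFRS API) of brute-force retrieval: every candidate embedding gets one dot product against the query, so the cost grows linearly with the number of candidates. The sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32
candidates = rng.normal(size=(10_000, dim))   # candidate tower embeddings
query = rng.normal(size=(dim,))               # query tower embedding

# One dot product per candidate: O(database size) work per query.
scores = candidates @ query

# Indices of the 3 highest-scoring candidates.
top_k = np.argsort(-scores)[:3]
print(top_k)
```

A fast approximate NNS library such as ScaNN avoids scoring every candidate, which is what makes millisecond retrieval over millions of items possible.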

Enter ScaNN. ScaNN is a state-of-the-art NNS library from Google Research. It significantly outperforms other NNS libraries on standard benchmarks. Furthermore, it integrates seamlessly with TensorFlow Recommenders. As seen below, the ScaNN Keras layer acts as a seamless drop-in replacement for brute force retrieval:

# Create a model that takes in raw query features, and
# recommends movies out of the entire movies dataset.
# Before
# index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
# index.index(movies.batch(100).map(model.movie_model), movies)
# After
scann = tfrs.layers.factorized_top_k.ScaNN(model.user_model)
scann.index(movies.batch(100).map(model.movie_model), movies)

# Get recommendations.
# Before
# _, titles = index(tf.constant(["42"]))
# After
_, titles = scann(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")

Because it’s a Keras layer, the ScaNN index serializes and automatically stays in sync with the rest of the TensorFlow Recommenders model. There is also no need to shuttle requests back and forth between the model and ScaNN because everything is already wired up properly. As NNS algorithms advance, ScaNN’s efficiency will keep improving, further reducing retrieval latency while preserving accuracy.

ScaNN can speed up large retrieval models by over 10x while still providing almost the same retrieval accuracy as brute force vector retrieval.

We believe that ScaNN’s features will lead to a transformational leap in the ease of deploying state-of-the-art deep retrieval models. If you’re interested in the details of how to build and serve ScaNN based models, have a look at our tutorial.

Deep cross networks

Effective feature crosses are the key to the success of many prediction models. Imagine that we are building a recommender system to sell blenders using users’ past purchase history. Individual features such as the number of bananas and cookbooks purchased give us some information about the user’s intent, but it is their combination – having bought both bananas and cookbooks – that gives us the strongest signal of the likelihood that the user will buy a blender. This combination of features is referred to as a feature cross.

Chart of cross features in deep cross networks

In web-scale applications, data are mostly categorical, leading to a large and sparse feature space. Identifying effective feature crosses in this setting often requires manual feature engineering or exhaustive search. Traditional feed-forward multilayer perceptron (MLP) models are universal function approximators; however, they cannot efficiently approximate even 2nd- or 3rd-order feature crosses, as pointed out in the Deep & Cross Network and Latent Cross papers.

What is a Deep & Cross Network (DCN)?

DCN was designed to learn explicit and bounded-degree feature crosses more effectively. It starts with an input layer (typically an embedding layer), followed by a cross network that models explicit feature interactions, and finally a deep network that models implicit feature interactions.

Cross Network

This is the core of a DCN. It explicitly applies feature crossing at each layer, and the highest polynomial degree (feature cross order) increases with layer depth. The following figure shows the (𝑖+1)-th cross layer.

Cross layer visualization. x0 is the base layer (typically set as the embedding layer), xi is the input to the cross layer, ☉ represents element-wise multiplications, and matrix W and vector b are the parameters to be learned.

When we only have a single cross layer, it creates 2nd-order (pairwise) feature crosses among input features. In the blender example above, the input to the cross layer would be a vector that concatenates three features: [country, purchased_bananas, purchased_cookbooks]. Then, the first dimension of the output would contain a weighted sum of pairwise interactions between country and all three input features; the second dimension would contain weighted interactions of purchased_bananas and all the other features, and so on.
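The single cross layer in the figure can be sketched in a few lines of NumPy; the weight values below are illustrative, not learned:

```python
import numpy as np

# x0 = [country, purchased_bananas, purchased_cookbooks] after embedding.
x0 = np.array([1.0, 2.0, 3.0])
W = np.eye(3) * 0.1    # learned weight matrix (illustrative values)
b = np.zeros(3)        # learned bias vector

# x1 = x0 * (W @ x0 + b) + x0: the element-wise product of x0 with a
# linear map of x0 creates the 2nd-order (pairwise) crosses, and the
# residual term carries x0 through unchanged.
x1 = x0 * (W @ x0 + b) + x0
print(x1)
```

Each output dimension j is x0[j] times a weighted sum of all input features, plus the residual, which is exactly the weighted pairwise interaction structure described above.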

The weights of these interaction terms form the matrix W: if an interaction is unimportant, its weight will be close to zero; if it is important, it will be far from zero.

To create higher-order feature crosses, we could stack more cross layers. For example, we now know that a single cross layer outputs 2nd-order feature crosses such as the interaction between purchased_bananas and purchased_cookbooks. We could further feed these 2nd-order crosses to another cross layer. Then, the feature crossing part would multiply those 2nd-order crosses with the original (1st-order) features to create 3rd-order feature crosses, e.g., interactions among country, purchased_bananas, and purchased_cookbooks. The residual connection would carry over those feature crosses that have already been created in the previous layer.

If we stack k cross layers together, the k-layered cross network would create all the feature crosses up to order k+1, with their importance characterized by parameters in the weight matrices and bias vectors.
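Stacking can be sketched as a simple loop; with k = 2 layers and identity weights (chosen only to make the polynomial terms easy to read), the output contains terms up to 3rd order:

```python
import numpy as np

def cross_layer(x0, xi, W, b):
    # x_{i+1} = x0 * (W @ xi + b) + xi (element-wise product with the base layer)
    return x0 * (W @ xi + b) + xi

x0 = np.array([2.0, 3.0])
W = np.eye(2)      # identity weights, purely for readability
b = np.zeros(2)

xi = x0
for _ in range(2):         # k = 2 cross layers -> crosses up to order k+1 = 3
    xi = cross_layer(x0, xi, W, b)
print(xi)
```

With these weights the output is x0**3 + 2*x0**2 + x0 per dimension, showing how each extra layer multiplies the existing crosses by the 1st-order features while the residual keeps the lower-order terms.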

Deep Network

The deep part of a Deep & Cross Network is a traditional feedforward multilayer perceptron (MLP).

The deep network and cross network are then combined to form DCN. Commonly, we could stack a deep network on top of the cross network (stacked structure); we could also place them in parallel (parallel structure).

Deep & Cross Network (DCN) visualization. Left: parallel structure; Right: stacked structure.
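The two combinations can be sketched as follows; the function names and parameter values here are illustrative, not the TFRS API:

```python
import numpy as np

def cross_network(x0, layers):
    # Stack of cross layers: x_{i+1} = x0 * (W @ xi + b) + xi.
    xi = x0
    for W, b in layers:
        xi = x0 * (W @ xi + b) + xi
    return xi

def deep_network(x, layers):
    # Plain feedforward MLP with ReLU activations.
    for W, b in layers:
        x = np.maximum(W @ x + b, 0.0)
    return x

d = 4
x = np.ones(d)
cross_params = [(np.eye(d) * 0.1, np.zeros(d))]
deep_params = [(np.eye(d), np.zeros(d))]

# Stacked: the cross network's output feeds the deep network.
stacked = deep_network(cross_network(x, cross_params), deep_params)

# Parallel: both networks see the input; their outputs are concatenated.
parallel = np.concatenate([cross_network(x, cross_params),
                           deep_network(x, deep_params)])
print(stacked.shape, parallel.shape)
```

In the parallel structure the concatenated output is wider, so the final prediction layer sees the explicit and implicit interactions side by side rather than composed.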

Model Understanding

A good understanding of the learned feature crosses helps improve model understandability. Fortunately, the weight matrix 𝑊 in the cross layer reveals what feature crosses the model has learned to be important.

Take the example of selling a blender to a customer. If purchasing both bananas and cookbooks is the most predictive signal in the data, a DCN model should be able to capture this relationship. The following figure shows the learned matrix of a DCN model with one cross layer, trained on synthetic data where the joint purchase feature is most important. We see that the model itself has learned that the interaction between `purchased_bananas` and `purchased_cookbooks` is important, without any manual feature engineering applied.

Learned weight matrix in the cross layer.

Cross layers are now implemented in TensorFlow Recommenders, and you can easily adopt them as building blocks in your models. To learn how, check out our tutorial for example usage and practical lessons. If you are interested in more detail, have a look at our research papers DCN and DCN v2.


We would like to give a special thanks to Derek Zhiyuan Cheng, Sagar Jain, Shirley Zhe Chen, Dong Lin, Lichan Hong, Ed H. Chi, Bin Fu, Gang (Thomas) Fu and Mingliang Wang for their critical contributions to Deep & Cross Network (DCN). We also would like to thank everyone who has helped with and supported the DCN effort from research idea to productionization: Shawn Andrews, Sugato Basu, Jakob Bauer, Nick Bridle, Gianni Campion, Jilin Chen, Ting Chen, James Chen, Tianshuo Deng, Evan Ettinger, Eu-Jin Goh, Vidur Goyal, Julian Grady, Gary Holt, Samuel Ieong, Asif Islam, Tom Jablin, Jarrod Kahn, Duo Li, Yang Li, Albert Liang, Wenjing Ma, Aniruddh Nath, Todd Phillips, Ardian Poernomo, Kevin Regan, Olcay Sertel, Anusha Sriraman, Myles Sussman, Zhenyu Tan, Jiaxi Tang, Yayang Tian, Jason Trader, Tatiana Veremeenko‎, Jingjing Wang, Li Wei, Cliff Young, Shuying Zhang, Jie (Jerry) Zhang, Jinyin Zhang, Zhe Zhao and many more (in alphabetical order). We’d also like to thank David Simcha, Erik Lindgren, Felix Chern, Nathan Cordeiro, Ruiqi Guo, Sanjiv Kumar, Sebastian Claici, and Zonglin Li for their contributions to ScaNN.


Advancing healthcare with the Healthcare Interoperability Readiness Program

The 21st Century Cures Act, a United States law enacted at the end of 2016, mandates patient data interoperability for payers, providers, and healthcare organizations. As we approach rolling implementation deadlines, healthcare organizations are wrestling with how to liberate data from siloed systems—not only to give patients more granular control of their data, but also to improve outcomes by giving doctors a more complete view into their patients’ conditions.

The stakes are significant. Yet, in speaking with our customers, the number of healthcare organizations that feel prepared to meet these new requirements is small. Why is this the case? In short, providers and payers aren’t sure where to start. And with many critical applications running on legacy IT systems that aren’t built on modern web standards, the goal can seem daunting.

That’s why today, we’re launching the Google Cloud Healthcare Interoperability Readiness Program. The program is designed to help healthcare organizations:

  • Understand their current interoperability maturity levels;

  • Map out a stepwise journey to enable interoperability; and 

  • Navigate changes and increase their readiness for the new Office of the National Coordinator for Health Information Technology (ONC) and the Centers for Medicare and Medicaid Services (CMS) rules.

With COVID-19 underscoring the importance of even more data sharing and flexibility, the next few years promise to accelerate data interoperability and the adoption of open standards even further—ideally ushering in new and meaningful partnerships across the care continuum, new avenues for business growth, and new pathways for patient-centered innovation.  

Our program is built to meet customers wherever they are on their interoperability journeys, and to empower them with tailored services, technologies, and strategies. We’re working with a variety of consulting and ISV partners, including Bain & Company, Boston Consulting Group, Deloitte, HCL Technologies, KPMG, MavenWave, Pluto7, SADA, and 8K Miles, to meet our customers’ unique needs and support the changes needed to meet the upcoming regulatory requirements.

How is interoperability achieved?
Interoperability is foundational to achieving healthcare’s transformational goals, from telemedicine to app-based healthcare ecosystems, and application programming interfaces (APIs) are in turn the foundation for interoperability. APIs have been around for decades and allow data to flow across disparate systems. Whereas older APIs were designed for bespoke integration projects, modern APIs are designed to be easy for developers to use and have become the standard for building mobile applications.

API management tools can put a security gateway between patient data and developers or apps, helping to protect a patient’s control over access to and uses of their data. And with API management, healthcare organizations can pursue the same sort of innovative models around healthcare data, while also applying governance and security controls, streamlining infrastructure complexities, and maintaining regulatory compliance and patient privacy. 

In addition to APIs, implementation of  open data standards—such as FHIR—is another critical step toward interoperability. We’ve worked closely with the U.S. Department of Health and Human Services (HHS) and collaborated across the tech industry to support open standards to electronically exchange healthcare information and build an ecosystem that supports data privacy, security, compliance, and API management.

How can the Google Cloud Interoperability Readiness Program help? 
Google Cloud has long supported data interoperability and an API-based ecosystem to reduce friction surrounding healthcare data. Through our Healthcare Interoperability Readiness Program, we’ll help customers understand the current status of their data and where it resides, map out a path to standardization and integration, and make use of data in a secure, reliable, and compliant manner. 

This program provides a comprehensive set of services for interoperability, including: 

  • HealthAPIx Accelerator provides the jumpstart for the interoperability implementation efforts. With best practices, pre-built templates and lessons learned from our customer and partner implementations, it offers a blueprint for healthcare stakeholders and app developers to build FHIR API-based digital experiences.

  • Apigee API Management provides the underpinning and enables a security and governance layer to deliver, manage, secure and scale APIs; consume and publish FHIR-ready APIs for partners and developers; build robust API analytics, and accelerate rollout of digital solutions.

  • Google Cloud Healthcare API enables secure methods (including de-identification) for ingesting, transforming, harmonizing, and storing your data in the latest FHIR formats, as well as HL7v2 and DICOM, and serves as a secondary longitudinal data store to streamline data sharing, application development, and analytics with BigQuery. 

  • Interoperability toolkit that includes solution architectures, implementation guides, sandboxes and other resources to help accelerate interoperability adoption and streamline compliance with standards such as FHIR R4. 

As we reflect on the lessons of COVID-19, building resilient interoperable health infrastructure will not only be a catalyst, but table stakes for delivering better care. The Healthcare Interoperability Readiness Program aims to help free up patient data and make it more accessible across the continuum of care, as well as set up organizations for long-term success with more modern, API-first architectures. We’re eager to help payers, providers, and life sciences organizations navigate these changes—and ultimately save patient lives.


Google charts the course for a modern data cloud

Google Cloud is a leader when it comes to data, and in the past few years, we’ve made leaps and bounds to help our customers level up their enterprise databases and analytics capabilities. Our data platform is a primary reason why the largest enterprises in the world like The Home Depot, HSBC, and UPS run their mission-critical applications on Google Cloud. We’ve also seen momentum in the analyst community, with Gartner, Forrester, and IDC validating our leadership in analyst evaluations across data analytics, databases, and AI. Our fully managed database and analytics services continue to power enterprise digital transformation as the always-on, hyperconnected world drives migrations to the cloud. 

Google was built to organize the world’s information and make it universally accessible and useful. To deliver on this vision, we process and analyze the world’s largest data sets on the cleanest and most reliable cloud infrastructure. We have leveraged this expertise to deliver a new kind of enterprise-ready data cloud to our customers that is simple, intelligent, and open. It offers built-in automation to ensure your data-first business is operating at its best, with the simplicity to build whatever is next. 

Let’s dive into five reasons why we lead in the data cloud space.

1. Leading analyst firms agree that Google Cloud’s database and analytics services are proven and enterprise-ready for any size data team. Today, our customers process and analyze up to petabytes of data on the world’s most advanced and scalable data platform. Customers of every size and maturity are able to seamlessly grow from small prototypes to global success. Cloud Spanner leads the relational world with its unique pairing of a relational operational database with non-relational scale. Cloud Bigtable unlocks high-throughput, low-latency applications and supports customers with millions of queries per second. Google Cloud delivers industry-leading reliability across regions, so you’re always up and running to support your mission-critical applications. Google has some of the highest SLAs in the industry: Spanner includes up to a 99.999% SLA, and BigQuery recently announced a 99.99% SLA. On performance, third-party analyst firms recognize Google Cloud as a leader in high-performance, scalable data management for analytics. To bring this all together, you need robust security and governance controls to protect customers’ data. Our customers’ data is encrypted by default, and identity and access management across our solutions is provided by Cloud Identity and Access Management (Cloud IAM). 

2. Google Cloud is one of the fastest-growing clouds in the world across industries. We’re seeing growth across customer segments and industry verticals. BigQuery is widely perceived as the leading solution for analytics and data warehousing; Looker, with its multi-cloud universal semantic layer, gets people to insights from data quickly; and our database services, like Spanner and Cloud SQL, power the most mission-critical applications while redefining the bounds of scale, availability, and performance. Our document database, Cloud Firestore, even has the most satisfied developers compared to any other cloud database on the market, according to a recent study by SlashData. Over the past year we’ve seen Cloud SQL’s popularity grow—it’s now one of Google Cloud’s fastest-growing top services. With the release of Database Migration Service, we’re now making it even easier for enterprises to move to Cloud SQL from on-premises or other clouds without disrupting their business. Our leadership in the data realm is a primary reason organizations like HSBC, Major League Baseball, Mayo Clinic, and Sharechat choose Google to run their data-driven applications. And that’s also why IDC named our data platform a leader in their 2020 MarketScape report on APeJ Cloud Data Analytics Platform Vendors.

3. Google Cloud’s databases and analytics operate with an open philosophy, which includes open source software and databases, open APIs, open data formats, and an inclusive partner ecosystem. Customers can choose from a wide range of operational and analytical engines, open source tools, and machine learning services. Cloud SQL provides a managed service for the world’s most popular open source databases, MySQL and PostgreSQL, so customers can benefit from the latest community enhancements paired with enterprise-grade availability, security, and performance. And open APIs ease migrations, portability, and data access through your preferred tools. In addition, our open platform enables out-of-the-box interoperability between a variety of services for ingestion, storage, processing and analytics—including Apache Spark, Presto and more. And with our rich partner ecosystem and integrations with core Google services (such as Google Analytics), you can quickly and seamlessly integrate with the data sources and technologies you and your team know and love. We are committed to partner and customer success. Our open, partner-friendly platform not only helps our customers scale their data and analytics needs, but helps our partners like Elastic, Confluent, and MongoDB scale their cloud go-to-market.

4. Google makes it easy for enterprises to solve their biggest data-driven problems with packaged horizontal and vertical analytics solutions, embedded with market-leading AI. Packaged, priced, and supported by partners, solutions range from improving contact center operations and document processing to targeted industry solutions for healthcare, retail, manufacturing and industrial, financial services, and media and entertainment. As companies look to expand their business across new channels and deliver real-time experiences, Firestore helps accelerate mobile, web, and IoT application development. Firestore enables developers to quickly build reliable, real-time applications at scale that can handle the changing demands of today’s business. For companies that want to build their own analytics solutions and ask questions of their data, we’ve made it easier for anyone within the organization—from the business user to the data scientist—to get insights from data with BigQuery ML, Dataproc Hub, Connected Sheets, and Data QnA. “With Connected Sheets, we’re not really pulling the data into the spreadsheet; rather it lives in the database where it belongs,” says Peter Van Nieuwerburgh, Global Change Manager at PwC. “The ability to go and so easily analyze and visualize the data is really powerful.”

5. We’re the only hyperscale cloud provider that’s executed on a multi-cloud vision. Google Cloud’s commitment to multi-cloud enables customers to use their data where and how they want. Customers can build or modernize their apps anywhere and deliver new app features faster, enabling success in this rapidly changing environment. Industry analysts have recognized Google Cloud as one of the only hyperscale cloud vendors to deliver on the promise of multi-cloud. In addition to facilitating customer innovation with our dedication to open standards, Google Cloud ensures customers can choose the right cloud vendor or environment for each of their workloads, removing over-dependence on one IT vendor. Customers can run apps wherever they want and get the management and support that comes with Google Cloud, creating opportunities for developers to rapidly build and innovate in any environment, including on-premises. With solutions like Anthos, customers can run cloud offerings in a hybrid environment using containers. This architecture also runs in a multi-cloud environment and today runs on AWS. Moving up the stack, Dataproc on Kubernetes allows enterprises to build containerized Spark machine learning and data processing jobs that can be deployed anywhere. Additionally, BigQuery Omni allows you to analyze data in AWS using standard SQL, and without leaving the familiar BigQuery interface. 

We’re just getting started
Google is a leader when it comes to data. By building a data infrastructure that powers Google products used by billions of people, such as Search, Maps, Ads, and YouTube, we have stress-tested our systems, services, and expertise. We have used this expertise to deliver a new kind of data cloud to every enterprise, with fully managed automations to ensure your data-first business is operating at the highest level and the simplicity to build whatever is next. We will continue to help our customers spend less time on management and more time on building. That means continuing to deliver a data cloud that creates an integrated experience across multi-cloud and hybrid environments for your enterprise data and analytics needs. 

Stay tuned for more to come, and get started today with our free trial offering.

Gartner, Magic Quadrant for Cloud Database Management Systems,  November 23, 2020, Donald Feinberg, Adam Ronthal, Merv Adrian, Henry Cook, Rick Greenwald

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Gartner 2020 Magic Quadrant for Cloud Database Management Systems names Google a Leader

We’re announcing today that Google has been named a Leader in the first-ever Gartner Magic Quadrant for Cloud Database Management Systems (DBMS), 2020. We believe this recognition is due to Google Cloud’s data analytics and databases vision and strategy, and is echoed by the growth in customers across all industries and geographies selecting Google Cloud as their data platform of choice. 

Gartner 2020 Magic Quadrant for Cloud Database Management Systems names Google a Leader

Gartner has positioned Google as a Magic Quadrant Leader among the furthest three positioned vendors on the completeness-of-vision axis. We are delivering on our multi-cloud and hybrid promise, showcasing adoption across a diverse customer base in every region and industry, setting the new standard for flexible pricing with strong financial governance capabilities, and partnering across a diverse ecosystem. We’re making our vision a reality and are proud of the work we’re doing as the first hyperscale provider to offer a multi-cloud data warehouse with BigQuery Omni. In addition, we offer the industry’s most flexible pricing with Committed Use Discounts across Cloud SQL engines; instant insights for your entire business with Data QnA; and even better reliability and development experience with Cloud Spanner, just to name a few.

In today’s world, it’s clear that you have to consider a comprehensive end-to-end ecosystem of data analytics and database services to get full value from your data. So it doesn’t make sense to evaluate analytic and operational use cases in isolation. Per our understanding, as the operational and analytic markets for database management systems (DBMSs) have converged, Gartner has converged its evaluations into a single DBMS Magic Quadrant, with vendors and products that provide support for both classes of use cases. Having been a Leader in both of the previous Magic Quadrants, we’re very supportive of this move, since it aligns to the way our enterprise customers buy, deploy, and consume our services. 

Moving the focus to customer innovation, not infrastructure

Enterprises like Procter & Gamble, Vodafone, and Sharechat have trusted Google Cloud to help them build and scale their products faster while improving digital customer experiences using our fully managed data platform.  

“We’re always looking to ensure a great consumer experience across all our categories, from healthcare to beauty products and much more,” says Vittorio Cretella, CIO, Procter & Gamble. “As a leader in analytics and AI, Google Cloud is a strategic partner helping us offer our consumers superior products and services that provide value in a secure and transparent way.”

We are honored to be a Leader in the 2020 Gartner Magic Quadrant for Cloud Database Management Systems (DBMS), and look forward to continuing to innovate and partner with you on your digital transformation journey. 

Download the full 2020 Gartner Magic Quadrant for Cloud Database Management Systems report. 

You can get started for free with Google Cloud today

Gartner, Magic Quadrant for Cloud Database Management Systems, November 23, 2020, Donald Feinberg, Adam Ronthal, Merv Adrian, Henry Cook, Rick Greenwald

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Google Cloud.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Anthos on bare metal, now GA, puts you back in control

Enterprise IT organizations want it all, don’t they? Choice and freedom in their technology decisions, but also automation, security, scale, and support. From the beginning, Anthos has been about putting you back in charge of how you consume the cloud (private or public), while imparting some of the best practices we’ve learned from running a global cloud at scale. With Anthos on bare metal, now generally available, we’ve gone a step further. 

Anthos on bare metal opens up new possibilities for how you run your workloads, and where. Some of you want to run Anthos on your existing virtualized infrastructure, but others want to eliminate the dependency on a hypervisor layer, to modernize applications while reducing costs. For example, you may consider migrating VM-based apps to containers, and you might decide to run them at the edge on resource-constrained hardware. 

Anthos on bare metal is generally available today, with subscription or pay-as-you-go pricing. Below, we dive into the specifics of Anthos on bare metal and share technical details on how to get started. 

Leverage your existing investments

Anthos on bare metal allows you to leverage existing investments in hardware, OS, and networking infrastructure. The minimum system requirement to run Anthos on bare metal at the edge is two nodes with at least 4 cores, 32 GB of RAM, and 128 GB of disk space, with no specialized hardware. This means you can run Anthos on bare metal on almost any infrastructure.

Anthos on bare metal uses a “bring your own operating system” model. It runs atop physical or virtual instances, and supports Red Hat Enterprise Linux 8.1/8.2, CentOS 8.1/8.2, and Ubuntu 18.04/20.04 LTS. Anthos provides overlay networking and L4/L7 load balancing out of the box. You can also integrate your own load balancer, such as F5 or Citrix. For storage, you can deploy persistent workloads using CSI integration with your existing infrastructure.

You can deploy Anthos on bare metal using one of the following deployment models:

  • A standalone model allows you to manage every cluster independently. This is a good choice when running in an edge location or if you want your clusters to be administered independently from one another. 

  • A multi-cluster model allows a central IT team to manage a fleet of clusters from a centralized cluster, called the admin cluster. This is more suitable if you want to build automation, tooling or to delegate the lifecycle of clusters to individual teams without sharing sensitive credentials such as SSH keys or Google Cloud service account details.
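For illustration, here is a rough sketch of what a standalone cluster configuration might look like. The field names follow the shape of the file that the `bmctl` tool generates, but treat the schema and all values here as assumptions rather than an authoritative reference; consult the Anthos on bare metal documentation for the exact format.

```yaml
# Illustrative sketch only -- in practice, generate the real file with:
#   bmctl create config -c edge-cluster-1
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: edge-cluster-1
  namespace: cluster-edge-cluster-1
spec:
  # Deployment model: a "standalone" cluster administers itself;
  # in the multi-cluster model, an admin cluster manages user clusters.
  type: standalone
  controlPlane:
    nodePoolSpec:
      nodes:
      # Placeholder address of a control-plane machine.
      - address: 10.200.0.3
```

After editing the generated file to match your environment, a command along the lines of `bmctl create cluster -c edge-cluster-1` provisions the cluster (again, command shape per the bmctl tooling; verify against the current docs).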


Like with all Anthos environments, a bare metal cluster has a thin, secure connection back to Google Cloud called Connect. Once it’s installed in your clusters, you can centrally view, configure, and monitor your clusters from the Google Cloud Console. 

We’ve been working on Anthos on bare metal with early-access customers and design partners, and their feedback has been overwhelmingly positive. For example, VideoAmp offers a video measurement and optimization platform, and uses Anthos on bare metal to help reduce the operational overhead of managing clusters while also maximizing the utilization of their cloud infrastructure. 

“Here at VideoAmp, we run real-time compute-intensive applications, which enable advertisers to optimize their entire portfolio of linear TV, OTT and digital video to business outcomes. Kubernetes is a critical part of our strategy because of the scalability, portability, and flexibility it provides our developers,” says Hector Sahagun, Director of Engineering at VideoAmp. “Anthos brings centralized lifecycle and policy management tools, allowing our infrastructure teams to focus on key initiatives instead of the day-to-day management of Kubernetes.” 

Expanding the Anthos Ready Partner Program 

We’re launching Anthos on bare metal with our partners in the Anthos Ready Partner Program. The program highlights partner solutions that adhere to Google Cloud’s interoperability requirements and meet the infrastructure and application development needs of enterprise customers running Anthos. These solutions are validated to work across Anthos deployment options, including Anthos on Google Cloud, Anthos on VMware, and Anthos on bare metal.

Atos, Dell Technologies, Equinix Metal, HPE, Intel, NetApp, Nutanix, NVIDIA, and other partners have committed to delivering Anthos on bare metal for their customers’ infrastructure requirements. In addition, our storage partners, including Dell Technologies, HPE, NetApp, Portworx, and Pure Storage, are providing shared storage solutions by qualifying their respective CSI drivers for Anthos on bare metal.

Finally, system integrators including Arctiq, Atos, IGNW, SADA, SoftServe, and World Wide Technology can help you get started with Anthos on bare metal with services and solutions to integrate Anthos on bare metal in your environment. 

More workloads from more places, with more ease 

No matter where you run your workloads—in Google Cloud, on-prem, in other clouds or at the edge—Anthos provides a consistent platform on which your teams can quickly build great applications that adapt to an ever-changing world. We developed Anthos to help all organizations tackle multi-cloud, taking advantage of modern cloud-native technologies like containers, serverless, service mesh, and consistent policy management, both in the cloud and on-premises. Now, with the option of running Anthos on bare metal, there are even more ways to enjoy the benefits of this modern cloud application stack.  

To learn more about Anthos on bare metal, check out this video, where you’ll learn how to create a cluster and deploy your own application on an on-prem cluster. Then, if you’re interested in seeing how Anthos on bare metal can help your organization get hybrid cloud right, reach out to our sales team to schedule an architecture design session.

Read More

Changes to Google Chat group conversations and classic Hangouts coming December 3, 2020

What’s changing

Starting December 3, 2020, we’ll make three changes to how group conversations work in Google Chat:
  • Add and change members. You’ll be able to add and change members of new group conversations.
  • Different Google Vault retention policy. If you have a Vault retention policy set, the updated group conversations will respect a different retention policy.
  • Compatibility with classic Hangouts. Group conversations in Hangouts will begin to appear in Google Chat over the coming weeks.
See below for more information on each of these updates.

Who’s impacted

Admins and end users

Why you’d use it

As we announced earlier this year, starting in the first half of 2021, everyone can begin upgrading from Hangouts to Chat. To ensure a smooth transition, we will help automatically migrate your Hangouts conversations and saved history. These changes further ensure compatibility between classic Hangouts and Chat to make migrating your users as seamless as possible.

Additional details

Updated group conversations
When a new member is added to a group conversation, all members will see a message announcing the new member. The new member will be able to see the entire conversation, even messages sent before they entered, allowing them to catch up easily.

If you have a group conversation created before December 3, you can easily create an updated conversation with the same people using the “Start a new chat” option in the conversation settings menu.

History and retention settings
While end users can toggle history on and off at the conversation level, admins can control whether to keep chat history for users in their organization. They can set the default and also let users change their history setting for each conversation.
Note that these updated group conversations will also respect a different retention policy in Vault. If you set custom Chat retention rules in Google Vault, the scope of coverage will change. Rules set for “All Chat Spaces” (previously known as “All Rooms”) will apply to existing chat rooms, plus updated group messages and group messages that synchronize between Chat and Hangouts. Visit the Help Center for more details.
Compatibility with classic Hangouts
Group conversations in Hangouts—beginning with conversations, followed by message history—will begin to appear in Google Chat over the coming weeks. This will allow your users to move from Hangouts to Chat without losing context. In addition, 1:1 messages, updated group conversations, and unthreaded rooms from Chat will begin to appear in Hangouts (Note: this change will only be available for users with Hangouts enabled).

Getting started

  • Admins: We recommend you review your existing retention rules and evaluate if you need to change them to retain only the message data you want. You can also visit the Help Center to learn more about turning history in Chat on or off for your organization.
  • End users: Membership in these updated group conversations will be editable by default starting December 3, 2020.

Rollout pace

Updated group conversations
Compatibility with classic Hangouts


  • Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Standard, and Enterprise Plus, as well as G Suite Basic, Business, Education, Enterprise for Education, and Nonprofits customers
  • Not available to Google Workspace Essentials and Enterprise Essentials customers


Read More

Irem from Turkey shares her groundbreaking work in TensorFlow and advice for the community

Posted by Jennifer Kohl, Global Program Manager, Google Developer Groups

Irem presenting at a Google Developer Group event

We recently caught up with Irem Komurcu, a TensorFlow developer and researcher at Istanbul Technical University in Turkey. Irem has been a long-serving member of Google Developer Groups (GDG) Düzce and also serves as a Women Techmakers (WTM) ambassador. Her work with TensorFlow has received several accolades, including being named a Hamdi Ulukaya Girişimi fellow. As one of twenty-four young entrepreneurs selected, she was flown to New York City last year to learn more about business and receive professional development.

With all this experience to share, we wanted you to hear how she approaches pursuing a career in tech, hones her TensorFlow skills with the GDG community, and thinks about how upcoming programmers can best position themselves for success. Check out the full interview below for more.

What inspired you to pursue a career in technology?

I first became interested in tech when I was in high school and went on to study computer engineering. At university, I had an eye-opening experience when I traveled from Turkey to the Google Developer Day event in India. It was here where I observed various code languages, products, and projects that were new to me.

In particular, I saw TensorFlow in action for the first time. Watching the powerful machine learning tool truly sparked my interest in deep learning and project development.

Can you describe your work with TensorFlow and Machine Learning?

I have studied many different aspects of TensorFlow and ML. My first work was on voice recognition and deep learning. However, I am now working as a computer vision researcher, conducting various segmentation, object detection, and classification processes with TensorFlow. In my free time, I write various articles about best practices and strategies to leverage TensorFlow in ML.

What has been a useful learning resource you have used in your career?

I kicked off my studies on deep learning with online resources. It’s a basic first step, but a powerful one. There were so many blogs, code samples, examples, and tutorials for me to dive into. Both the Google Developer Group and TensorFlow communities also offered chances to bounce questions and ideas off other developers as I learned.

Between these technical resources and the person-to-person support, I was lucky to start working with the GDG community while also taking the first steps of my career. There were so many opportunities to meet people and grow all around.

What is your favorite part of the Google Developer Group community?

I love being in a large community with technology-oriented people. GDG is a network of professionals who support each other, and that enables people to develop. I am continuously sharing my knowledge with other programmers as they simultaneously mentor me. The chance for us to collaborate together is truly fulfilling.

What is unique about being a developer in your country/region?

The number of women supported in science, technology, engineering, and mathematics (STEM) is low in Turkey. To address this, I partner with Women Techmakers (WTM) to give educational talks on TensorFlow and machine learning to women who want to learn how to code in my country. So many women are interested in ML, but just need a friendly, familiar face to help them get started. With WTM, I’ve already given over 30 talks to women in STEM.

What advice would you give to someone who is trying to grow their career as a developer?

Keep researching new things. Read everything you can get your eyes on. Technology has been developing rapidly, and it is necessary to make sure your mind can keep up with the pace. That’s why I recommend communities like GDG that help make sure you’re up to date on the newest trends and learnings.

Want to work with other developers like Irem? Then find the right Google Developer Group for you, here.

Read More

AlphaFold: a solution to a 50-year-old grand challenge in biology

In a major scientific advance, the latest version of our AI system, AlphaFold, has been recognised as a solution to the protein folding problem by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP).

Read More

The U.K.’s top nostalgic films: Access now on Pixel’s 5G

With so many countries now returning to various forms of lock down, and winter steadily drawing in, many of us are turning to our favorite films and movie moments to find some familiarity in a time of uncertainty. 

In other words, we’re embracing nostalgia.

And why not? The movies we love are usually steeped in happy memories, attached to dreamy locations or feature music that temporarily transports us out of the present moment. They bring us joy and a sense of change, breaking up some of the monotony of life in lock down.

We asked Dr. Wing Yee Cheung, a Senior Lecturer in Psychology at the University of Winchester, about this, and learned that films are a great way to relive memories of happier times. “Movies are embedded with sensory memories of when we first watched them and whom we watched them with,” she writes. “Sensory inputs and social interactions are two key triggers of nostalgia. Watching these can be a way to walk down memory lane and reminisce [about] the way life used to be, what we used to do, and the people surrounding us.”

And because it’s the season of giving, we have our own gift for you: If you’re in the U.K., you can download classic films, such as “Four Weddings and a Funeral” or “Monty Python’s Life of Brian,” all from a unique Google Map, now until Dec. 10. Transport yourself to a world of nostalgia by searching the map for symbols that represent the films in relevant locations. If you find one, you’ll receive a code to rediscover and enjoy the movie on Google Play.*1

Image showing Four Weddings and a Funeral on a Pixel phone.

Anyone in the U.K. can take part, regardless of what type of phone you have—but of course if you do happen to own a new Pixel 5G-enabled device, you’ll be able to start your viewing party in a matter of seconds1. Thanks to movies on-demand combined with the technology of 5G networks2, you can choose your film, download it1 and settle in on the couch, all while the popcorn is still warm. Currently, 5G2 is one of the fastest ways to download a movie on any device. Both Pixel 5 and Pixel 4a with 5G2 enable you to download a film in seconds1. Whether you’re curled up on your sofa, pottering around the house, or outside on a walk, Pixel with 5G2 gives you access to the stories and characters you know and love, on the go; the speed of a 5G2 device immediately transports you to where you want to be.

So let’s lean into the nostalgia. As Dr. Cheung notes, it actually helps us cope with uncertainty: “Immersing ourselves in nostalgic moments is not about hiding our heads in the past. On the contrary, it can create new memories which can feed into future nostalgic experiences.”

It’s a great way to spend lock down with your family: Watching much-loved classics is a natural way for parents to share their experiences with their children and to make new memories together. And even if you’re physically on your own, you can use Google Duo on Pixel 5 to share your screen and watch your favorites with socially distant family and friends.3

“An old movie that makes us feel nostalgia can inject us with a complex range of emotions,” concludes Dr. Cheung. “We feel sentimental, predominantly happy, but with a tinge of longing.” And that’s something we can probably all relate to right now. 

*Offer begins on 25th November 2020 and ends 10th December 2020. Limited number of codes available. Subject to availability. Terms apply. See here for full terms. 

1.  Testing based on download speeds for content file sizes between 449MB and 749MB at off-peak times. Average download time was twenty seconds or less. Download speed depends upon many factors, such as file size, content provider and carrier network connection and capabilities. Testing conducted by Google on pre-production hardware in the UK in August 2020. Actual download speeds may be slower.  

2. Requires a 5G data plan (sold separately). 5G service and roaming not available on all carrier networks or in all areas and may vary by country. Contact carrier for details about current 5G network performance, compatibility, and availability. Phone connects to 5G networks, but 5G service, speed and performance depend on many factors including, but not limited to, carrier network capabilities, device configuration and capabilities, network traffic, location, signal strength and signal obstruction. Actual results may vary. Some features not available in all areas. Data rates may apply. See for info.

3. Requires a Google Duo account. Screen sharing not available on group calls.  Requires Wi-Fi or 5G internet connection.Not available on all apps and content. Data rates may apply. 5G service, speed and performance depend on many factors including, but not limited to, carrier network capabilities, device configuration and capabilities, network traffic, location, signal strength, and signal obstruction.

*Promotional code offer is provided by Google Commerce Limited (Google) for use on Google  Play Store UK only, and subject to the following terms. Offer begins on 25th November 2020 and ends 10th December, 2020 (‘Offer Period’). One (1) promotional code per user per film release, and up to a maximum of five (5) promotional codes per User during the Offer Period. Limited number of codes available. Subject to availability.

Available only to Users 18 or older with a delivery and billing address in the United Kingdom. Users must have internet access and must have or add a form of payment at checkout. Promotional codes cannot be used with Guest checkout; Users must be signed in to their Google account to redeem the code. 

Promotional codes can be redeemed by visiting or the Google Play Store app and entering the 16 digit code to receive a £5 or £10 discount for purchase or rental of any product on the Google Play Store UK. The discount will be applied at checkout. Promotional code must be redeemed by 31st December, 2021 or it will expire. Promotional codes may only be used once and may not be used in conjunction with any other offer or promotion. Any unused promotional balance will be applied to the associated Google account. Users may continue to use the unused promotional balance for Google Play purchases until such balance is £0, or any remaining promotional balance expires. Promotional codes are a discount off price for up to the promotional amount, are for one-time use only, cannot be transferred to other users, are not reloadable, cannot be exchanged for cash. Google and its third party partners if applicable, are not liable for lost or stolen promotional codes, or for expired promotional codes that are not redeemed within the redemption period. Terms subject to applicable laws. Void where prohibited.

Read More