Monthly: October 2020
Posted by Jon Harmer, Product Manager, Google Cloud
We recently introduced Google Workspace, which seamlessly brings together messaging, meetings, docs, and tasks and is a great way for teams to create, communicate, and collaborate. Google Workspace has what you need to get anything done, all in one place. This includes giving developers the ability to extend Google Workspace’s standard functionality, for example with Google Workspace Add-ons, launched earlier this year.
Google Workspace Add-ons, at launch, allowed a developer to build a single integration for Google Workspace that surfaces across Gmail, Google Drive, and Google Calendar. We recently announced that we expanded Google Workspace Add-ons by bringing more Google Workspace applications into the newer add-on framework: Google Docs, Google Sheets, and Google Slides. With Google Workspace Add-ons, developers can scale their presence across the multiple touchpoints where users engage, and the framework simplifies the process of building and managing add-ons.
One of our early developers for Google Workspace Add-ons has been Adobe. Adobe has been working to integrate Creative Cloud Libraries into Google Workspace. Using Google Workspace Add-ons, Adobe was able to quickly design a Creative Cloud Libraries experience that felt native to Google Workspace. “With the new add-ons framework, we were able to improve the overall performance and unify our Google Workspace and Gmail Add-ons,” said Ryan Stewart, Director of Product Management at Adobe. “This means a much better experience for our customers and much higher productivity for our developers. We were able to quickly iterate with the updated framework controls and easily connect it to the Creative Cloud services.”
One of the big differences between the Gmail integration and the Google Workspace integration is how it lets users work with Libraries. With Gmail, they’re sharing links to Libraries, but with Docs and Slides, they can add Library elements to their document or presentation. So by offering all of this in a single integration, we are able to provide a more complete Libraries experience. Being able to offer that breadth of experiences in a consistent way for users is exciting for our team.
Adobe’s Creative Cloud Libraries API, announced at Adobe MAX, was also integral to integrating Creative Cloud with Google Workspace, letting developers retrieve, browse, create, and get renditions of the creative elements in libraries.
Adobe’s new Add-on for Google Workspace lets you add brand colors, character styles, and graphics from Creative Cloud Libraries to Google Workspace apps like Docs and Slides. You can also save styles and assets back to Creative Cloud.
We understand that teams require many applications to get work done, and we believe that process should be simple and that those productivity applications should connect all of a company’s workstreams. With Google Workspace Add-ons, teams can bring their favorite workplace apps like Adobe Creative Cloud into Google Workspace, enabling a more productive day-to-day experience for design and marketing teams. With quick access to Creative Cloud Libraries, the Adobe Creative Cloud Add-on for Google Workspace lets everyone easily access and share assets in Gmail and apply brand colors, character styles, and graphics to Google Docs and Slides to keep deliverables consistent and on-brand. There’s a phased rollout to users, first with Google Docs, then Slides, so if you don’t see it in the Add-on yet, stay tuned as it is coming soon.
For developers, Google Workspace Add-ons lets you build experiences that not only let your customers manage their work, but also simplify how they work.
To learn more about Google Workspace Add-ons, please visit our Google Workspace developer documentation.
The Healthcare Text Annotation Guidelines are blueprints for capturing a structured representation of the medical knowledge stored in digital text. In order to automatically map the textual insights to structured knowledge, the annotations generated using these guidelines are fed into a machine learning algorithm that learns to systematically extract the medical knowledge in the text. We’re pleased to release the Healthcare Text Annotation Guidelines to the public as a standard.
The guidelines provide a reference for training annotators in addition to explicit blueprints for several healthcare annotation tasks. The annotation guidelines cover the following (a hypothetical example annotation is sketched after the list):
- The task of medical entity extraction with examples from medical entity types like medications, procedures, and body vitals.
- Additional tasks with defined examples, such as entity relation annotation and entity attribute annotation. For instance, the guidelines specify how to relate a medical procedure entity to the source medical condition entity, or how to capture the attributes of a medication entity like dosage, frequency, and route of administration.
- Guidance for annotating an entity’s contextual information like temporal assessment (e.g., current, family history, clinical history), certainty assessment (e.g., unlikely, somewhat likely, likely), and subject (e.g., patient, family member, other).
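To make the shape of these tasks concrete, here is a minimal, hypothetical annotation record for a single sentence. The field names and label strings are illustrative only and are not the schema prescribed by the guidelines.

```python
# Hypothetical annotation of one sentence; field names and labels are illustrative,
# not the schema defined by the Healthcare Text Annotation Guidelines.
annotation = {
    "text": "Patient started metformin 500 mg twice daily for type 2 diabetes.",
    "entities": [
        {"id": "e1", "type": "MEDICATION", "span": [16, 25], "mention": "metformin",
         "attributes": {"dosage": "500 mg", "frequency": "twice daily", "route": None},
         "context": {"temporal": "current", "certainty": "likely", "subject": "patient"}},
        {"id": "e2", "type": "MEDICAL_CONDITION", "span": [49, 64], "mention": "type 2 diabetes"},
    ],
    "relations": [
        {"type": "MEDICATION_TREATS_CONDITION", "source": "e1", "target": "e2"},
    ],
}
```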
Google consulted with industry experts and academic institutions in the process of assembling the Healthcare Text Annotation Guidelines. We took inspiration from other open source and research projects like i2b2 and added context to the guidelines to support information extraction needs for industry applications like Healthcare Effectiveness Data and Information Set (HEDIS) quality reporting. The data types contained in the Healthcare Text Annotation Guidelines are a common denominator across information extraction applications. Each industry application can have additional information extraction needs that are not captured in the current version of the guidelines. We chose to open source this asset so the community can tailor this project to their needs.
We’re thrilled to open source this project. We hope the community will contribute to the refinement and expansion of the Healthcare Text Annotation Guidelines, so they mirror the ever-evolving nature of healthcare.
It’s time for another update to our Stadia Savepoint series, recapping the new games, features, and changes for Stadia in October.
This month we celebrated some Good Stuff on Stadia, teaming up with YouTube Creators Lamarr Wilson and LaurenZside who revealed free and exclusive week-long demos for PAC-MAN™ Mega Tunnel Battle and Immortals Fenyx Rising, plus an OpenDev Beta for HUMANKIND. We can’t wait for these amazing games to launch on Stadia, starting with PAC-MAN Mega Tunnel Battle on November 17. Over three days we also revealed new games and content coming to Stadia including the Drastic Steps expansion for Orcs Must Die! 3 (Nov. 6), Star Wars Jedi: Fallen Order (Nov. 24), ARK: Survival Evolved (early 2021), Hello Engineer (2021), Young Souls (2021), and Phoenix Point (2021).
Throughout October, players explored a Dungeons & Dragons adventure in Baldur’s Gate 3 Early Access, fought against a surveillance state in Watch Dogs: Legion, and carved their path to vengeance in Sekiro: Shadows Die Twice. All of these games, plus many others that arrived this month, are now available for purchase on the Stadia store. Players subscribed to Stadia Pro received instant access to a library of 29 games in October, with even more available on November 1.

Crowd Choice now available
Crowd Choice, available in Baldur’s Gate 3 and Dead by Daylight, changes how games unfold when live streaming on YouTube. Viewers have the power to vote on decisions made by the player in each game.
Play Stadia with mobile data
Mobile data gameplay has graduated from Experiments and is now a fully supported Stadia feature, letting you play games over 4G and 5G. Data usage may reach up to 2.7 GB/hr. Gameplay is service-, network-, and connection-dependent, and this feature may not be available in all areas.
Referral rewards for friends and family
Refer someone for a free trial of Stadia Pro and they’ll get an extra free month of games. Plus, if they subscribe after their trial is up, you’ll get an extra free month of Stadia Pro as well. Terms apply.
Push notifications on mobile
Receive notifications in the Stadia app on Android and iOS devices about Stadia Pro games, incoming friend requests, and more.
Stadia Pro updates
- Claim six new games for free with Stadia Pro in November: Sniper Elite 4, Risk of Rain 2, The Gardens Between, Hello Neighbor: Hide & Seek, République, and Sundered: Eldritch Edition.
- Twenty-nine existing games are still available to add to your Stadia Pro collection: Destiny 2: The Collection, PLAYERUNKNOWN’S BATTLEGROUNDS, SteamWorld Quest: Hand of Gilgamech, SteamWorld Dig, SteamWorld Dig 2, SteamWorld Heist, GYLT, Little Nightmares, Power Rangers: Battle for the Grid, SUPERHOT, Panzer Dragoon Remake, Crayta, West of Loathing, Orcs Must Die! 3, Strange Brigade, Just Shapes & Beats, Rock of Ages 3: Make & Break, Super Bomberman R Online, Gunsport, Hitman, Hello Neighbor, Metro: Last Light Redux, Embr Early Access, Dead by Daylight, Human Fall Flat, SUPERHOT: MIND CONTROL DELETE, Lara Croft: Temple of Osiris, Celeste, and Jotun: Valhalla Edition.
- Act quickly: It’s your last chance to add Just Shapes & Beats, Metro: Last Light Redux, Strange Brigade, and West of Loathing to your Stadia Pro collection before November 1.
- There are still ongoing discounts for both Stadia Pro subscribers and all players – check out the web or mobile Stadia store for the latest.
October content launches on Stadia
- Baldur’s Gate 3 Early Access
- Cake Bash
- Crayta – Dark Circus
- Dead by Daylight
- HUMANKIND OpenDev Beta
- Human: Fall Flat
- Immortals Fenyx Rising Demo
- PAC-MAN™ Mega Tunnel Battle Demo
- PLAYERUNKNOWN’S BATTLEGROUNDS – Season 9
- Sekiro: Shadows Die Twice
- Watch Dogs: Legion
New games coming to Stadia announced this month
- ARK: Survival Evolved
- Assassin’s Creed Origins
- Assassin’s Creed Syndicate
- Assassin’s Creed Unity
- Far Cry New Dawn
- Far Cry 5
- HUMANKIND
- Hello Engineer
- Orcs Must Die! 3 – Drastic Steps
- PAC-MAN™ Mega Tunnel Battle
- Phoenix Point
- Tom Clancy’s Ghost Recon® Wildlands
- Young Souls
- Watch Dogs
- Watch Dogs 2
That’s all for October—we’ll be back soon to share more updates. As always, stay tuned to the Stadia Community Blog, Facebook and Twitter for the latest news.
A few weeks ago, Google introduced the Pixel 4a (5G) and the Pixel 5. And new Pixels mean new Pixel accessories, starting with the new Made by Google cases.
As part of Google’s ongoing commitment to sustainability, the outer fabric of our new cases is made with 70 percent recycled materials. In fact, assuming each bottle contains about nine grams of plastic, two recycled plastic bottles can provide enough knitted outer fabric for five cases.
We did all of this while delivering a pop of color. In addition to Blue Confetti, Static Gray, and Basically Black, we’re adding two new colors: Green Chameleon for the Pixel 5 and Chili Flakes for the Pixel 4a (5G).
Cases are only the beginning, though. How you outfit your phone says a lot about you, so we decided to find out what different members of the Pixel team are using in order to get some accessory inspiration.
Nicole Laferriere, Pixel Program Manager
No more battery anxiety! The iOttie iON Duo is my perfect WFH companion because it lets me wirelessly charge my Pixel 5 and Pixel Buds at the same time. The stand provides a great angle so I never miss a notification and charges my Pixel quickly. And I love the custom Pixel Bud-shaped charging pad because it fits them so perfectly, and there’s no waiting to see if the device starts charging.
Ocie Henderson, Digital Store Certification Lead
I love the Power Support CLAW for Stadia because it’s my favorite way to game on the go. 2020 has definitely impacted the number of places I can go, of course, and the places I’m able to eat. Fortunately the drive-thru is still an option, and my Power Support CLAW can sit atop my Stadia Controller and transform my wait into an opportunity to adventure, too.
Helen Hui, Technical Program Manager
Moment lenses are my go-to accessory whenever I go hiking. With the lens on Pixel phones, I can skip the heavy digital camera and still achieve stunning results. Last December, I used the Moment Telephoto 58mm Lens and my Pixel 4 to capture stunning photos of Antelope Canyon in Arizona. I can’t wait to try the new Moment case for Pixel 5.
Janelle Stribling, Pixel Product Marketing
When I’m not working, I’m always on the go—I especially love discovering new hiking trails, so my must-have accessory is my iOttie wireless car charger. I can attach my Pixel 5 with one hand and then I’m hands-free the rest of the drive since I can use Google Assistant and Google Maps to find my destination. I love arriving with a full battery so I can start capturing photos of the views immediately!
Nasreen Shad, Pixel Product Manager
Now more than ever, I like starting each work-from-home day with a morning routine built around my Pixel Stand. I keep it on my nightstand and use the Sunrise Alarm to gradually brighten my phone’s screen for a gentle wake up. With the new home controls, I can easily change my thermostat settings and turn on my living room lights before even getting out of bed. Once I’m up, Google Assistant gives me a daily briefing of headlines from my favorite news outlets. And lucky for me, my San Francisco apartment is small enough that I can leave my Pixel on the Pixel Stand and play some music while I get warmed up for a morning jog.
Posted by Chet Haase
It’s a Wrap!
We’ve just finished the first series of MAD Skills videos and articles on Modern Android Development. This time, the topic was the Navigation component, the API and tool that helps you create and edit navigation paths through your application.
The great thing about videos and articles is that, unlike performance art, they tend to stick around for later enjoyment. So if you haven’t had a chance to see these yet, check out the links below to see what we covered. Except for the Q&A episode at the end, each episode has essentially identical content in the video and article version, so use whichever format you prefer for content consumption.
Episode 1: Overview
The first episode provides a quick, high-level overview of Navigation Component, including how to create a new application with navigation capability (using Android Studio’s handy application templates), details on the containment hierarchy of a navigation-enabled UI, and an explanation of some of the major APIs and pieces involved in making Navigation Component work.
Or in article form: https://medium.com/androiddevelopers/navigation-component-an-overview-4697a208c2b5
Episode 2: Dialog Destinations
Episode 2 explores how to use the API to navigate to dialog destinations. Most navigation takes place between different fragment destinations, which are swapped out inside of the NavHostFragment object in the UI. But it is also possible to navigate to external destinations, including dialogs, which exist outside of the NavHostFragment.
Or in article form: https://medium.com/androiddevelopers/navigation-component-dialog-destinations-bfeb8b022759
Episode 3: SafeArgs
This episode covers SafeArgs, the facility provided by Navigation component for easily passing data between destinations.
Or in article form: https://medium.com/androiddevelopers/navigating-with-safeargs-bf26c17b1269
Episode 4: Deep Links
This episode is on Deep Links, the facility provided by Navigation component for helping the user get to deeper parts of your application from UI outside the application.
Or in article form: https://medium.com/androiddevelopers/navigating-with-deep-links-910a4a6588c
Episode 5: Live Q&A
Finally, to wrap up the series (as we plan to do for future series), I hosted a Q&A session with Ian Lake. Ian fielded questions from you on Twitter and YouTube, and we discussed everything from feature requests like multiple backstacks (spoiler: it’s in the works!) to Navigation support for Jetpack Compose (spoiler: the first version of this was just released!) to other questions people had about navigation, fragments, Up-vs-Back, saving state, and other topics. It was pretty fun — more like a podcast with cameras than a Q&A.
(There is no article for this one; enjoy the video above)
Sample App: DonutTracker
The application used for most of the episodes above is DonutTracker, an app that you can use for tracking important data about donuts you enjoy (or don’t). Or you can just use it for checking out the implementation details of these Navigation features; your choice.
Posted by Tingbo Hou and Tyler Mullen, Software Engineers, Google Research
Video conferencing is becoming ever more critical in people’s work and personal lives. Improving that experience with privacy enhancements or fun visual touches can help center our focus on the meeting itself. As part of this goal, we recently announced ways to blur and replace your background in Google Meet, which use machine learning (ML) to better highlight participants regardless of their surroundings. Whereas other solutions require installing additional software, Meet’s features are powered by cutting-edge web ML technologies built with MediaPipe that work directly in your browser — no extra steps necessary. One key goal in developing these features was to provide real-time, in-browser performance on almost all modern devices, which we accomplished by combining efficient on-device ML models, WebGL-based rendering, and web-based ML inference via XNNPACK and TFLite.
Background blur and background replacement, powered by MediaPipe on the web.
Overview of Our Web ML Solution
The new features in Meet are developed with MediaPipe, Google’s open source framework for cross-platform customizable ML solutions for live and streaming media, which also powers ML solutions like on-device real-time hand, iris and body pose tracking.
A core need for any on-device solution is to achieve high performance. To accomplish this, MediaPipe’s web pipeline leverages WebAssembly, a low-level binary code format designed specifically for web browsers that improves speed for compute-heavy tasks. At runtime, the browser converts WebAssembly instructions into native machine code that executes much faster than traditional JavaScript code. In addition, Chrome 84 recently introduced support for WebAssembly SIMD, which processes multiple data points with each instruction, resulting in a performance boost of more than 2x.
Our solution first processes each video frame by segmenting a user from their background (more about our segmentation model later in the post) utilizing ML inference to compute a low resolution mask. Optionally, we further refine the mask to align it with the image boundaries. The mask is then used to render the video output via WebGL2, with the background blurred or replaced.
WebML Pipeline: All compute-heavy operations are implemented in C++/OpenGL and run within the browser via WebAssembly.
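As a rough illustration of that per-frame flow, here is a small Python/NumPy sketch. The production pipeline runs in C++/WebAssembly with WebGL2 rendering; the resolutions, helper functions, and blur stand-in below are placeholders, not the actual implementation.

```python
import numpy as np

def process_frame(frame_rgb, run_segmentation, refine_mask=None, background=None):
    """Conceptual per-frame flow: low-res mask -> optional refinement -> composite."""
    h, w, _ = frame_rgb.shape

    # 1. ML inference on a downscaled frame produces a low-resolution person mask.
    low_res_mask = run_segmentation(frame_rgb)            # e.g. 144x256, float32 in [0, 1]

    # 2. Optionally refine the mask so it aligns better with image boundaries.
    if refine_mask is not None:
        low_res_mask = refine_mask(low_res_mask, frame_rgb)

    # 3. Upsample the mask to frame resolution (nearest-neighbor, for brevity).
    ys = np.linspace(0, low_res_mask.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, low_res_mask.shape[1] - 1, w).astype(int)
    mask = low_res_mask[ys][:, xs][..., None]             # HxWx1

    # 4. Composite: keep the person, blur or replace everything else.
    if background is None:
        background = box_blur(frame_rgb)                  # background-blur path
    out = mask * frame_rgb + (1.0 - mask) * background
    return out.astype(np.uint8)

def box_blur(img, k=15):
    """Tiny stand-in for the production separable blur: a box filter per axis."""
    out = img.astype(np.float32)
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode='same'), axis, out)
    return out
```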
In the current version, model inference is executed on the client’s CPU for low power consumption and widest device coverage. To achieve real-time performance, we designed efficient ML models with inference accelerated by the XNNPACK library, the first inference engine specifically designed for the novel WebAssembly SIMD specification. Accelerated by XNNPACK and SIMD, the segmentation model can run in real-time on the web.
Enabled by MediaPipe’s flexible configuration, the background blur/replace solution adapts its processing based on device capability. On high-end devices it runs the full pipeline to deliver the highest visual quality, whereas on low-end devices it continues to perform at speed by switching to compute-light ML models and bypassing the mask refinement.
Segmentation Model
On-device ML models need to be ultra lightweight for fast inference, low power consumption, and small download size. For models running in the browser, the input resolution greatly affects the number of floating-point operations (FLOPs) necessary to process each frame, and therefore needs to be small as well. We downsample the image to a smaller size before feeding it to the model. Recovering a segmentation mask as fine as possible from a low-resolution image adds to the challenges of model design.
The overall segmentation network has a symmetric structure with respect to encoding and decoding, while the decoder blocks (light green) also share a symmetric layer structure with the encoder blocks (light blue). Specifically, channel-wise attention with global average pooling is applied in both encoder and decoder blocks, which is friendly to efficient CPU inference.
Model architecture with MobileNetV3 encoder (light blue), and a symmetric decoder (light green).
We modified MobileNetV3-small as the encoder, which has been tuned by network architecture search for the best performance with low resource requirements. To reduce the model size by 50%, we exported our model to TFLite using float16 quantization, resulting in a slight loss in weight precision but with no noticeable effect on quality. The resulting model has 193K parameters and is only 400KB in size.
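For reference, float16 post-training quantization with the TensorFlow Lite converter looks roughly like this; the saved-model path is a placeholder, and this is a generic sketch rather than the exact export setup used for the Meet model.

```python
import tensorflow as tf

# Convert a trained segmentation model to TFLite with float16 weight quantization.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/segmentation_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as float16

tflite_model = converter.convert()
with open("segmentation_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```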
Rendering Effects
Once segmentation is complete, we use OpenGL shaders for video processing and effect rendering, where the challenge is to render efficiently without introducing artifacts. In the refinement stage, we apply a joint bilateral filter to smooth the low resolution mask.
Rendering effects with artifacts reduced. Left: Joint bilateral filter smooths the segmentation mask. Middle: Separable filters remove halo artifacts in background blur. Right: Light wrapping in background replace.
The blur shader simulates a bokeh effect by adjusting the blur strength at each pixel proportionally to the segmentation mask values, similar to the circle-of-confusion (CoC) in optics. Pixels are weighted by their CoC radii, so that foreground pixels will not bleed into the background. We implemented separable filters for the weighted blur, instead of the popular Gaussian pyramid, as it removes halo artifacts surrounding the person. The blur is performed at a low resolution for efficiency, and blended with the input frame at the original resolution.
Background blur examples.
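To illustrate the idea of weighting samples by their CoC so foreground pixels do not bleed into the background, here is a simplified, single-axis NumPy sketch. The real implementation is an OpenGL shader working at low resolution, and the radius and epsilon values below are illustrative.

```python
import numpy as np

def coc_weighted_blur_row(row_rgb, coc, max_radius=8):
    """One horizontal pass of a CoC-weighted blur for a single image row.

    row_rgb: (N, 3) float array; coc: (N,) blur weights in [0, 1], ~0 on the person.
    Each neighbor's contribution is scaled by its own CoC, so sharp foreground
    pixels contribute almost nothing and do not bleed into the blurred background.
    """
    n = row_rgb.shape[0]
    out = np.zeros_like(row_rgb, dtype=np.float32)
    for i in range(n):
        r = max(1, int(max_radius * coc[i]))          # blur radius grows with CoC
        lo, hi = max(0, i - r), min(n, i + r + 1)
        w = coc[lo:hi] + 1e-4                         # weight samples by their CoC
        out[i] = (w[:, None] * row_rgb[lo:hi]).sum(axis=0) / w.sum()
    return out
```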
For background replacement, we adopt a compositing technique, known as light wrapping, for blending segmented persons and customized background images. Light wrapping helps soften segmentation edges by allowing background light to spill over onto foreground elements, making the compositing more immersive. It also helps minimize halo artifacts when there is a large contrast between the foreground and the replaced background.
Background replacement examples.
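A simplified NumPy sketch of light wrapping is shown below; the edge-band construction and the wrap strength are illustrative choices, not the exact formulation used in the Meet shader.

```python
import numpy as np

def light_wrap_composite(person_rgb, person_mask, background_rgb, wrap=0.4):
    """Composite the segmented person over a new background with light wrapping.

    person_mask is 1.0 on the person and 0.0 elsewhere; along the silhouette,
    some background color is mixed back into the foreground so its light
    appears to 'wrap' around the person, softening the segmentation edge.
    """
    mask = person_mask.astype(np.float32)[..., None]
    fg = person_rgb.astype(np.float32)
    bg = background_rgb.astype(np.float32)

    # Soft edge band: largest where the mask is neither clearly 0 nor clearly 1.
    edge_band = 4.0 * mask * (1.0 - mask)

    # Spill background light onto the foreground within the edge band.
    spill = wrap * edge_band
    fg_wrapped = fg * (1.0 - spill) + bg * spill

    # Standard alpha composite using the (soft) person mask.
    return (mask * fg_wrapped + (1.0 - mask) * bg).astype(np.uint8)
```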
Performance
To optimize the experience for different devices, we provide model variants at multiple input sizes (i.e., 256×144 and 160×96 in the current release), automatically selecting the best according to available hardware resources.
We evaluated the speed of model inference and the end-to-end pipeline on two common devices: MacBook Pro 2018 with 2.2 GHz 6-Core Intel Core i7, and Acer Chromebook 11 with Intel Celeron N3060. For 720p input, the MacBook Pro can run the higher-quality model at 120 FPS and the end-to-end pipeline at 70 FPS, while the Chromebook runs inference at 62 FPS with the lower-quality model and 33 FPS end-to-end.
| Model | FLOPs | Device | Model Inference | Pipeline |
| --- | --- | --- | --- | --- |
| 256×144 | 64M | MacBook Pro 2018 | 8.3 ms (120 FPS) | 14.3 ms (70 FPS) |
| 160×96 | 27M | Acer Chromebook 11 | 16.1 ms (62 FPS) | 30 ms (33 FPS) |

Model inference speed and end-to-end pipeline on high-end (MacBook Pro) and low-end (Chromebook) laptops.
For quantitative evaluation of model accuracy, we adopt the popular metrics of intersection-over-union (IOU) and boundary F-measure. Both models achieve high quality, especially for having such a lightweight network:
| Model | IOU | Boundary F-measure |
| --- | --- | --- |
| 256×144 | 93.58% | 0.9024 |
| 160×96 | 90.79% | 0.8542 |

Evaluation of model accuracy, measured by IOU and boundary F-measure.
We also release the accompanying Model Card for our segmentation models, which details our fairness evaluations. Our evaluation data contains images from 17 geographical subregions of the globe, with annotations for skin tone and gender. Our analysis shows that the model is consistent in its performance across the various regions, skin-tones, and genders, with only small deviations in IOU metrics.
Conclusion
We introduced a new in-browser ML solution for blurring and replacing your background in Google Meet. With this, ML models and OpenGL shaders can run efficiently on the web. The developed features achieve real-time performance with low power consumption, even on low-power devices.
Acknowledgments
Special thanks to the people who worked on this project, in particular Sebastian Jansson, Rikard Lundmark, Stephan Reiter, Fabian Bergmark, Ben Wagner, Stefan Holmer, Dan Gunnarson, Stéphane Hulaud and to all our team members who worked on the technology with us: Siargey Pisarchyk, Karthik Raveendran, Chris McClanahan, Marat Dukhan, Frank Barchard, Ming Guang Yong, Chuo-Ling Chang, Michael Hays, Camillo Lugaresi, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich, and Matthias Grundmann.
What’s changing
Who’s impacted
Why you’d use it
Additional details
Getting started
Rollout pace
- Rapid Release domains: Gradual rollout to eligible devices (up to 7 days for feature visibility) starting on October 30, 2020
- Scheduled Release domains: Gradual rollout to eligible devices (up to 7 days for feature visibility) starting on November 6, 2020
Availability
- Available to Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Enterprise for Education, and Nonprofits customers and users with personal Google accounts.
- Selecting your own picture is not available to participants of meetings organized by Education customers.
Resources
Roadmap
- This feature was listed as an upcoming release.
A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis, to training, to evaluation, to deployment, and more. It’s hard to compose and track these processes in an ad-hoc manner—for example, in a set of notebooks or scripts—and things like auditing and reproducibility become increasingly problematic.
Cloud AI Platform Pipelines, launched earlier this year, helps solve these issues: it provides a way to deploy robust, repeatable machine learning pipelines, along with monitoring, auditing, version tracking, and reproducibility, and it delivers an enterprise-ready, easy-to-install, secure execution environment for your ML workflows.
While the Pipelines Dashboard UI makes it easy to upload, run, and monitor pipelines, you may sometimes want to access the Pipelines framework programmatically. Doing so lets you build and run pipelines from notebooks, and programmatically manage your pipelines, experiments, and runs. To get started, you’ll need to authenticate to your Pipelines installation endpoint. How you do that depends on the environment in which your code is running. So, today, that’s what we’ll focus on.
Event-triggered Pipeline calls
One interesting class of use cases that we’ll cover is using the SDK with a service like Cloud Functions to set up event-triggered Pipeline calls. These allow you to kick off a deployment based on new data added to a GCS bucket, new information added to a PubSub topic, or other events.

With AI Platform Pipelines, you specify a pipeline using the Kubeflow Pipelines (KFP) SDK, or by customizing the TensorFlow Extended (TFX) Pipeline template with the TFX SDK. To connect using the SDK from outside the Pipelines cluster, your credentials must be set up in the remote environment to give you access to the endpoint of the AI Platform Pipelines installation. In many cases, where it’s straightforward to install and initialize gcloud for your account (or it’s already set up for you, as is the case with AI Platform Notebooks), connection is transparent.
Alternatively, if you are running on Google Cloud, in a context where it is not straightforward to initialize gcloud, you can authenticate by obtaining and using an access token via the underlying VM’s metadata. If that runtime environment is using a different service account than the one used by the Pipelines installation, you’ll also need to give that service account access to the Pipelines endpoint. This scenario is the case, for example, with Cloud Functions, whose instances use the project’s App Engine service account.
Finally, if you are not running on Google Cloud, and gcloud is not installed, you can use a service account credentials file to generate an access token.
We’ll describe these options below, and give an example of how to define a Cloud Function that initiates a pipeline run, allowing you to set up event-triggered Pipeline jobs.
Using the Kubeflow Pipelines SDK to connect to an AI Platform Pipelines cluster via gcloud access
To connect to an AI Platform Pipelines cluster, you’ll first need to find the URL of its endpoint.
An easy way to do this is to visit your AI Pipelines dashboard, and click on SETTINGS.

A window will pop up that looks similar to the following:

Copy the displayed code snippet to connect to your installation’s endpoint using the KFP SDK. This simple notebook example lets you test the process. (Here is an example that uses the TFX SDK and TFX Templates instead).
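For orientation, the copied snippet boils down to creating a KFP client pointed at your installation’s endpoint, roughly as below; the host URL is a placeholder for the one shown in your settings window.

```python
import kfp

# Endpoint URL copied from the AI Platform Pipelines SETTINGS window (placeholder).
client = kfp.Client(host='https://XXXXXXXX-dot-us-central1.pipelines.googleusercontent.com')

# Quick sanity check that authentication and connectivity work.
print(client.list_pipelines())
```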
Connecting from AI Platform Notebooks
If you’re using an AI Platform Notebook running in the same project, connectivity will just work. All you need to do is provide the URL for the endpoint of your Pipelines installation, as described above.
Connecting from a local or development machine
You might instead want to deploy to your Pipelines installation from your local machine or other similar environments. If you have gcloud installed and authorized for your account, authentication should again just work.
Connecting to the AI Platform Pipelines endpoint from a GCP runtime
For serverless environments like Cloud Functions, Cloud Run, or App Engine, with transitory instances that use a different service account, it can be problematic to set up and initialize gcloud. Here we’ll use a different approach: we’ll allow the service account to access Cloud AI Pipelines’ inverse proxy, and obtain an access token that we pass when creating the client object. We’ll walk through how to do this with a Cloud Functions example.
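A minimal sketch of that approach is below, assuming the kfp SDK and a placeholder endpoint URL; note that the metadata server is only reachable from inside Google Cloud runtimes.

```python
import requests
import kfp

METADATA_TOKEN_URL = ('http://metadata.google.internal/computeMetadata/v1/'
                      'instance/service-accounts/default/token')

def get_access_token():
    # Ask the runtime's metadata server for a token for its service account.
    resp = requests.get(METADATA_TOKEN_URL, headers={'Metadata-Flavor': 'Google'})
    resp.raise_for_status()
    return resp.json()['access_token']

# Placeholder host; use your installation's inverse-proxy endpoint URL.
client = kfp.Client(
    host='https://XXXXXXXX-dot-us-central1.pipelines.googleusercontent.com',
    existing_token=get_access_token())
```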
Example: Event-triggered Pipelines deployment using Cloud Functions
Cloud Functions is Google Cloud’s event-driven serverless compute platform. Using Cloud Functions to trigger a pipeline deployment opens up many possibilities for supporting event-triggered pipelines, where you can kick off a deployment based on new data added to a Google Cloud Storage bucket, new information added to a PubSub topic, and so on.
For example, you might want to automatically kick off an ML training pipeline run once a new batch of data has arrived, or an AI Platform Data Labeling Service “export” finishes.
Here, we’ll look at an example where deployment of a pipeline is triggered by the addition of a new file to a Cloud Storage bucket.
For this scenario, you probably don’t want to set up a Cloud Functions trigger on the Cloud Storage bucket that holds your dataset, as that would trigger each time a file was added—probably not the behavior you want, if updates include multiple files. Instead, upon completion of the data export or ingestion process, you could write a Cloud Storage file to a separate “trigger bucket”, where the file contains information about the path to the newly added data. A Cloud Functions function defined to trigger on that bucket could read the file contents and pass the information about the data path as a param when launching the pipeline run.
There are two primary steps to setting up a Cloud Functions function to deploy a pipeline. The first is giving the service account used by Cloud Functions—your project’s App Engine service account—access to the service account used by the Pipelines installation, by adding it as a Member with Project Viewer privileges. By default, the Pipelines service account will be your project’s Compute Engine default service account.
Then, you define and deploy a Cloud Functions function that kicks off a pipeline run when triggered. The function obtains an access token for the Cloud Functions instance’s service account, and this token is passed to the KFP client constructor. Then, you can kick off the pipeline run (or make other requests) via the client object.
Information about the triggering Cloud Storage file or its contents can be passed as a pipeline runtime parameter.
Because the Cloud Function needs to have the kfp SDK installed, you will need to define a requirements.txt file used by the Cloud Functions deployment that specifies this.
This notebook walks you through the process of setting this up, and shows the Cloud Functions function code. The example defines a very simple pipeline that just echoes a file name passed as a parameter. The Cloud Function launches a run of that pipeline, passing the name of the new or modified file that triggered the Function call.
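A condensed sketch of such a function follows; the endpoint URL, pipeline ID, and the filename parameter are placeholders, and the notebook above contains the complete, working version (remember that kfp and requests must be listed in requirements.txt).

```python
# main.py -- sketch of a Cloud Function, triggered by a Cloud Storage object,
# that launches a run of an existing pipeline. Placeholders: HOST, PIPELINE_ID,
# and the 'filename' pipeline parameter.
import requests
import kfp

HOST = 'https://XXXXXXXX-dot-us-central1.pipelines.googleusercontent.com'
PIPELINE_ID = 'your-pipeline-id'

def _get_access_token():
    # Token for the Cloud Functions instance's service account, via the metadata server.
    url = ('http://metadata.google.internal/computeMetadata/v1/'
           'instance/service-accounts/default/token')
    resp = requests.get(url, headers={'Metadata-Flavor': 'Google'})
    return resp.json()['access_token']

def gcs_trigger(data, context):
    """Background function: fires when a file lands in the 'trigger bucket'."""
    client = kfp.Client(host=HOST, existing_token=_get_access_token())
    experiment = client.create_experiment('cf-triggered-runs')
    client.run_pipeline(
        experiment_id=experiment.id,
        job_name='run-{}'.format(data['name']),
        pipeline_id=PIPELINE_ID,
        params={'filename': data['name']})  # pass the triggering file as a runtime param
```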
Connecting to the Pipelines endpoint using a service account credentials file
If you’re developing locally and don’t have gcloud installed, you can also obtain a credentials token via a locally-available service account credentials file. This example shows how to do that. It’s most straightforward to use credentials for the same service account as the one used for the Pipelines installation—by default the Compute Engine service account. Otherwise, you will need to give your alternative service account access to the Compute Engine account.
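A minimal sketch using the google-auth library to mint a token from a downloaded key file is shown here; the key path and endpoint URL are placeholders.

```python
import kfp
from google.auth.transport.requests import Request
from google.oauth2 import service_account

# Build credentials from a downloaded service account key file (placeholder path).
credentials = service_account.Credentials.from_service_account_file(
    'path/to/service-account-key.json',
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(Request())  # fetches an access token into credentials.token

client = kfp.Client(
    host='https://XXXXXXXX-dot-us-central1.pipelines.googleusercontent.com',  # placeholder
    existing_token=credentials.token)
```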
Summary
There are several ways you can use the AI Platform Pipelines API to remotely deploy pipelines, and the notebooks we introduced here should give you a great head start. Cloud Functions, in particular, lets you support many types of event-triggered pipelines. To learn more about putting this into practice, check out the Cloud Functions notebook for an example of how to automatically launch a pipeline run on new data. Give these notebooks a try, and let us know what you think! You can reach me on Twitter at @amygdala.
Established companies in any industry, including telecommunications, often take a cautious approach when it comes to adopting new technology. This is true of any technology and especially of tools used across entire organizations and at all levels—like collaboration and communication tools. I joined Optiva Inc. as its Chief Marketing Officer in August 2018, and right off the bat I faced the need for cloud-based productivity and collaboration tools like Google Workspace (formerly G Suite), which was new to me.
Optiva was in the midst of making an organization-wide change to implement Google Workspace productivity and collaboration tools. Employees at that time—at Optiva and across other organizations I used to work with—primarily collaborated by attaching documents to company emails and meeting their colleagues in office corridors. Meetings were almost always in person or via phone. These processes were inefficient and contributed to version-control confusion and decreased overall productivity. They were also based on our pre-COVID world.
Today, Optiva is a fully remote-first organization, and our employees use all the tools in Google Workspace. Collaboration has increased dramatically, surprising even initial skeptics. Occasionally we still encounter resistance from new employees who are used to a different way of working. This is logical, but more often than not, they thank us later, and many have told me that they see the value for their productivity and the entire organization’s collaboration.
Google Workspace has become a real advantage, especially for a global company like Optiva with employees across the world and in so many different time zones. Instead of waiting for team members’ availability to review or collaborate on a document, wasting time coordinating who works on the latest version, or who is the “owner” of a deck, employees can open a document, view new changes, and add updates, even when others are working on the same document in parallel.
Google Workspace promotes business as usual during COVID-19
The combination of Optiva’s remote-first policy and Google Workspace collaboration tools proved a welcome addition during the challenges posed by COVID-19. Optiva was ahead of the curve of remote working policies in the telecommunications industry and in the software industry as a whole, so when the work-at-home orders went into effect in March, Optiva’s business charged ahead without missing a beat.
Google Meet has helped us maintain a strong corporate culture during this challenging time. For example, we constantly use video conferencing, which has been a great way to see the entire team and get a real sense for how each employee is doing—professionally as well as personally. Using Google Chat allowed us to keep in touch with everyone across our global teams. It helps us ensure that everyone feels secure and can work during this challenging time. Also, we have continued to create new teams with employees all around the world. If we attempted to onboard or train them using conference calls or emails, the experience wouldn’t be the same. Meet has helped us develop the strongest teams possible, faster, and much more effectively.
We also use Google Workspace tools to cross-train employees on tasks and processes in departments outside their own. That way, if one of our employees has to be out of the office unexpectedly, the company benefits from full business continuity. To make this possible, various teams and departments documented their specific processes, held training sessions on Meet, and recorded them so any employee could access them on demand.
Many companies struggled in the early days of the self-quarantine orders, especially when it came to the sudden need to select and deploy new technologies for remote collaboration and, more importantly, to keep supporting their customers. We were fortunate that our business was unfazed—and we owe a lot of this success to Google Workspace.
Available today, the new Nest Thermostat is smarter and more affordable than ever. By using AI, it keeps homes comfortable while helping people save energy and even find out if something might be wrong with their eligible HVAC system.
To learn a little more about what powers the Nest Thermostat, we took some time to talk to Marco Bonvini and Ramya Bhagavatula, software engineers on the Nest team.
The Nest Thermostat has used AI since the beginning. What’s different about this latest launch?
Ramya: We really focused on what the experience would be like for people. Nest thermostats have always been really sophisticated, and with the new Nest Thermostat we really wanted to put more control in people’s hands. They’re able to label their temperature settings: “This is my comfort zone, this is the temperature I like it to be when I’m sleeping.” We’re using people’s preferences and adding machine learning to find ways to help you save energy. If you enable Savings Finder, it will recommend minor changes to your set temperatures or schedule to help you save; if it looks good to you, you press “yes.” It takes away all the mystery.
Also, from the very start, we knew that with a smart thermostat we should have the ability to figure out when something might be going wrong with your HVAC system. Now, we’re taking steps to make that possible for most systems in Canada and the U.S. with HVAC monitoring, which is rolling out today to all Nest thermostats in those regions.
Where did the idea for the HVAC monitoring feature come from?
Marco: It started two years ago, as a side project. The first question was “is this going to be valuable for people?” and the answer was “yes.” When our customers had an HVAC issue, they would call us assuming there was something wrong. We were trying to help them troubleshoot and connect them to a Nest Pro, but we wanted to do it more proactively. That led us to the second question, “can we do this?” and the answer was also, “yes, we can do this.” Moving forward, we should be able to provide even more context, so it will help people, and pros, even more. We already saw improvements since we launched the beta earlier this year, so we’re really encouraged to provide more proactive help to customers.
What made this possible?
Ramya: Cloud computing advancements, definitely. We used to run a lot of algorithms on the device; that’s what got Nest started. Now, with cloud computing, we can aggregate data anonymously from Nest thermostats to inform what sort of actions we take and what we can suggest to owners. This helps inform features like Savings Finder and HVAC monitoring.
Originally, each thermostat operated on its own, but now we have the power to make intelligent decisions based on anonymized data, which might not have been possible if we were just looking at each individual device.
How does a smart thermostat find possible HVAC issues?
Marco: We monitor the estimated ambient and target temperatures and predict the time to temperature. We compare that predicted, expected behavior against what the system actually does and look for anomalies, which may indicate potential performance issues with the HVAC system.
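Conceptually, and only as an illustrative sketch rather than Nest’s actual algorithm, that check amounts to comparing the predicted time to temperature against what the system actually achieves:

```python
def looks_anomalous(predicted_minutes, observed_minutes, tolerance=1.5):
    """Flag a heating or cooling cycle that took far longer than predicted.

    tolerance is an illustrative ratio, not a real product threshold.
    """
    return observed_minutes > tolerance * predicted_minutes

# Example: we predicted 20 minutes to reach the target temperature, but it took 45.
print(looks_anomalous(20, 45))  # True -> worth surfacing as a potential HVAC issue
```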

What do you most enjoy about working on Nest thermostats?
Marco: Something that’s unique is that we’re providing these new features to legacy, older devices. In a lot of ways, Nest thermostats started the IoT category; the original is 10 years old and it’s still running. While the thermostat has changed over the years, we’re committed to supporting everyone and all devices with compatible systems. An IoT device that’s 10 years old and still gets new feature releases is pretty special.