Monthly: January 2021

Out of office information will now display when replying to or mentioning a user in a Google Docs comment

Quick launch summary

In Google Docs, you’ll now see out of office information when replying to or mentioning other users in a comment.

When mentioning a single user in a new comment or thread, you’ll see the OOO banner and information on when they plan to return.

For multi-person threads, you’ll see condensed out of office information. You can select the info icon to view more information on each specific person. 

Availability 

  • Available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus, as well as G Suite Basic, Business, Education, Enterprise for Education, and Nonprofits customers 

Read More

New white paper: Strengthening operational resilience in financial services by migrating to Google Cloud

Operational resilience continues to be a key focus for financial services firms. Regulators from around the world are refocusing supervisory approaches on operational resilience to support the soundness of financial firms and the stability of the financial ecosystem. Our new white paper discusses the continuing importance of operational resilience to the financial services sector, and the role that a well-executed migration to Google Cloud can play in strengthening it. Here are the key highlights: 

Operational resilience in financial services

Financial services firms and regulators are increasingly focused on operational resilience, reflecting the growing dependency that the financial services industry has on complex systems, automation and technology, and third parties. 

Operational resilience can be defined as the “ability to deliver operations, including critical operations and core business lines, through a disruption from any hazard”¹. Given this definition, operational resilience needs to be thought of as a desired outcome rather than a singular activity, and the approach to achieving that outcome therefore needs to address a multitude of operational risks, including: 

  • Cybersecurity: Continuously adjusting key controls, people, processes and technology to prevent, detect and react to external threats and malicious insiders.

  • Pandemics: Sustaining business operations in scenarios where people cannot, or will not, work in close proximity to colleagues and customers.

  • Environmental and Infrastructure: Designing and locating facilities to mitigate the effects of localised weather and infrastructure events, and to be resilient to physical attacks.

  • Geopolitical: Understanding and managing risks associated with geographic and political boundaries between intragroup and third-party dependencies.

  • Third-party Risk: Managing supply chain risk, in particular for critical outsourced functions, by addressing vendor lock-in, survivability, and portability.

  • Technology Risk: Designing and operating technology services to provide the required levels of availability, capacity, performance, quality and functionality. 

Operational resilience benefits from migrating to Google Cloud

There is a growing recognition among policymakers and industry leaders that, far from creating unnecessary new risk, a well-executed migration to public cloud technology over the coming years will provide capabilities to financial services firms that will enable them to strengthen operational resilience in ways that are not otherwise achievable.  

Foundationally, Google Cloud’s infrastructure and operating model have the scale and robustness to give financial services customers a commercially viable way to increase their resilience.

Equally important are the Google Cloud products, and our support for hybrid and multi-cloud, that help financial services customers manage various operational risks in a differentiated manner:

  • Cybersecurity that is designed in, and from the ground up. From encryption by default, to our Titan security chip, to high-scale DDoS defences, to the power of Google Cloud data analytics and Security Command Center, our solutions help you secure your environment.

  • Solutions that decouple employees and customers from physical offices and premises. This includes zero-trust based remote access that removes the need for complex VPNs, rapidly deployed customer contact center AI virtual agents, and Google Workspace for best-in-class workforce collaboration.

  • Globally and regionally resilient infrastructure, data centers and support. We offer a global footprint of 24 regions and 73 zones allowing us to serve customers in over 200 countries, with a globally distributed support function so we can support customers even in adverse circumstances.

  • Strategic autonomy through appropriate controls. Our recognition that customers and policymakers, particularly in Europe, strive for even greater security and autonomy is embodied in our work on data sovereignty, operational sovereignty, and software sovereignty.

  • Portability, substitutability and survivability, using our open cloud. We understand that from a financial services firm’s perspective, achieving operational resilience may include solving for situations where their third parties are unable, for any reason, to provide the services contracted.

  • Reducing technical debt, whilst focusing on great financial products and services. We provide a portfolio of solutions so that financial services firms’ technology organisations can focus on delivering high-quality services and experiences to customers, and not on operating foundational technologies such as servers, networks and mainframes.

We are committed to ensuring that Google Cloud solutions for financial services are designed in a manner that best positions the sector in all aspects of operational resilience. Furthermore, we recognize that this is not simply about making Google Cloud resilient: the sector needs autonomy, sovereignty and survivability. You can learn more about Google Cloud’s point of view on operational resilience in financial services by downloading the white paper.


1. “Sound Practices to Strengthen Operational Resilience”, FRB, OCC, FDIC

Read More

Farmers Edge enlists Google Cloud to bring AI/ML and more to the AgTech space

Farmers Edge announced the launch of a global strategic co-selling initiative with the goal of bringing AI, ML, and analytics to more customers across the AgTech space. Enhancements to their digital platform – FarmCommand® – will include near real-time carbon and sustainability tracking, simplified insurance reporting, and claims management. These enhancements will not only help accelerate the rate at which organisations are able to move to the cloud, but will also allow them to do so while minimizing costs and maximizing sustainability.  

Learn more about our latest efforts in the Farmers Edge press release here!

Stabilizing Live Speech Translation in Google Translate

Posted by Naveen Arivazhagan, Senior Software Engineer and Colin Cherry, Staff Research Scientist, Google Research

The transcription feature in the Google Translate app may be used to create a live, translated transcription for events like meetings and speeches, or simply for a story at the dinner table in a language you don’t understand. In such settings, it is useful for the translated text to be displayed promptly to help keep the reader engaged and in the moment.

However, with early versions of this feature the translated text suffered from multiple real-time revisions, which can be distracting. This was because of the non-monotonic relationship between the source and the translated text, in which words at the end of the source sentence can influence words at the beginning of the translation.

Transcribe (old) — Left: Source transcript as it arrives from speech recognition. Right: Translation that is displayed to the user. The frequent corrections made to the translation interfere with the reading experience.

Today, we are excited to describe some of the technology behind a recently released update to the transcribe feature in the Google Translate app that significantly reduces translation revisions and improves the user experience. The research enabling this is presented in two papers. The first formulates an evaluation framework tailored to live translation and develops methods to reduce instability. The second demonstrates that these methods do very well compared to alternatives, while still retaining the simplicity of the original approach. The resulting model is much more stable and provides a noticeably improved reading experience within Google Translate.

Transcribe (new) — Left: Source transcript as it arrives from speech recognition. Right: Translation that is displayed to the user. At the cost of a small delay, the translation now rarely needs to be corrected.

Evaluating Live Translation
Before attempting to make any improvements, it was important to first understand and quantifiably measure the different aspects of the user experience, with the goal of maximizing quality while minimizing latency and instability. In “Re-translation Strategies For Long Form, Simultaneous, Spoken Language Translation”, we developed an evaluation framework for live-translation that has since guided our research and engineering efforts. This work presents a performance measure using the following metrics:

  • Erasure: Measures the additional reading burden on the user due to instability. It is the number of words that are erased and replaced for every word in the final translation (a rough computation sketch follows this list).
  • Lag: Measures the average time that has passed between when a user utters a word and when the word’s translation displayed on the screen becomes stable. Requiring stability avoids rewarding systems that can only manage to be fast due to frequent corrections.
  • BLEU score: Measures the quality of the final translation. Quality differences in intermediate translations are captured by a combination of all metrics.
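
To make the erasure metric above concrete, here is a rough Python sketch of how it could be computed from the sequence of translations shown to the reader. This is illustrative only; the exact tokenization and normalization used in the paper may differ.

```python
def erasure(displayed_translations):
    """Count the words shown to the reader and later replaced, normalized
    by the length of the final translation (illustrative sketch only)."""
    erased = 0
    prev = []
    for current in displayed_translations:
        # Words kept = length of the longest common prefix with what was shown.
        common = 0
        for old_word, new_word in zip(prev, current):
            if old_word != new_word:
                break
            common += 1
        erased += len(prev) - common  # words the reader saw and then lost
        prev = current
    return erased / max(1, len(displayed_translations[-1]))

# Example: three successive displayed translations, as lists of words.
updates = [["I", "went"], ["I", "ate", "lunch"], ["I", "ate", "lunch", "today"]]
print(erasure(updates))  # 1 word erased / 4 words in the final translation = 0.25
```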

It is important to recognize the inherent trade-offs between these different aspects of quality. Transcribe enables live-translation by stacking machine translation on top of real-time automatic speech recognition. For each update to the recognized transcript, a fresh translation is generated in real time; several updates can occur each second. This approach placed Transcribe at one extreme of the three-dimensional quality framework: it exhibited minimal lag and the best quality, but also had high erasure. Understanding this allowed us to work towards finding a better balance.

Stabilizing Re-translation
One straightforward solution to reduce erasure is to decrease the frequency with which translations are updated. Along this line, “streaming translation” models (for example, STACL and MILk) intelligently learn to recognize when sufficient source information has been received to extend the translation safely, so the translation never needs to be changed. In doing so, streaming translation models are able to achieve zero erasure.

The downside with such streaming translation models is that they once again take an extreme position: zero erasure necessitates sacrificing BLEU and lag. Rather than eliminating erasure altogether, a small budget for occasional instability may allow better BLEU and lag. More importantly, streaming translation would require retraining and maintenance of specialized models specifically for live-translation. This precludes the use of streaming translation in some cases, because keeping a lean pipeline is an important consideration for a product like Google Translate that supports 100+ languages.

In our second paper, “Re-translation versus Streaming for Simultaneous Translation”, we show that our original “re-translation” approach to live-translation can be fine-tuned to reduce erasure and achieve a more favorable erasure/lag/BLEU trade-off. Without training any specialized models, we applied a pair of inference-time heuristics to the original machine translation models — masking and biasing.

The end of an ongoing translation tends to flicker because it is more likely to have dependencies on source words that have yet to arrive. We reduce this by truncating some number of words from the translation until the end of the source sentence has been observed. This masking process thus trades latency for stability, without affecting quality. This is very similar to delay-based strategies used in streaming methods such as Wait-k, but applied only during inference and not during training.
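
As a minimal sketch of the masking heuristic just described (the function name and default mask size here are invented; the real system tunes the mask against the erasure/lag budget):

```python
def masked_display(partial_translation, source_sentence_complete, mask_k=10):
    """Hold back the last mask_k words of an in-progress translation until
    the full source sentence has been observed, trading a little latency
    for stability. Illustrative sketch only."""
    if source_sentence_complete:
        return partial_translation  # sentence finished: show everything
    cutoff = max(0, len(partial_translation) - mask_k)
    return partial_translation[:cutoff]
```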

Neural machine translation often “see-saws” between equally good translations, causing unnecessary erasure. We improve stability by biasing the output towards what we have already shown the user. On top of reducing erasure, biasing also tends to reduce lag by stabilizing translations earlier. Biasing interacts nicely with masking, as masking words that are likely to be unstable also prevents the model from biasing toward them. However, this process does need to be tuned carefully, as a high bias, along with insufficient masking, may have a negative impact on quality.
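
One simple way to picture such a bias, sketched below under the assumption that decoding works from a next-token probability distribution, is to interpolate the model's distribution with a one-hot distribution on the token already displayed; the production decoder may implement biasing differently.

```python
import numpy as np

def biased_next_token_probs(model_probs, displayed_token_id, beta=0.5):
    """Nudge decoding toward the token already shown to the user.
    beta is the bias strength; as noted above, too much bias combined
    with too little masking can hurt translation quality."""
    biased = (1.0 - beta) * np.asarray(model_probs, dtype=float)
    biased[displayed_token_id] += beta
    return biased / biased.sum()  # renormalize defensively
```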

The combination of masking and biasing produces a re-translation system with high quality and low latency, while virtually eliminating erasure. The table below shows how the metrics react to the heuristics we introduced and how they compare to the other systems discussed above. The graph demonstrates that even with a very small erasure budget, re-translation surpasses zero-flicker streaming translation systems (MILk and Wait-k) trained specifically for live-translation.

System                             BLEU    Lag (seconds)    Erasure
Re-translation (Transcribe old)    20.4    4.1              2.1
+ Stabilization (Transcribe new)   20.2    4.1              0.1

Evaluation of re-translation on the IWSLT 2018 English-German test set (TED talks) with and without the inference-time stabilization heuristics of masking and biasing. Stabilization drastically reduces erasure. Translation quality, measured in BLEU, is very slightly impacted due to biasing. Despite masking, the effective lag remains the same because the translation stabilizes sooner.
Comparison of re-translation with stabilization and specialized streaming models (Wait-k and MILk) on WMT 14 English-German. The BLEU-lag trade-off curve for re-translation is obtained via different combinations of bias and masking while maintaining an erasure budget of less than 2 words erased for every 10 generated. Re-translation offers better BLEU / lag trade-offs than streaming models which cannot make corrections and require specialized training for each trade-off point.

Conclusion
The solution outlined above returns a decent translation very quickly, while allowing it to be revised as more of the source sentence is spoken. The simple structure of re-translation enables the application of our best speech and translation models with minimal effort. However, reducing erasure is just one part of the story — we are also looking forward to improving the overall speech translation experience through new technology that can reduce lag when the translation is spoken, or that can enable better transcriptions when multiple people are speaking.

Read More

Sign in to sites faster and personalize your lock screen

We’re always finding ways to make using Chromebooks as seamless as possible. Today, with our latest Chrome OS release, we’re introducing a faster sign-in experience as well as personalized lock screens. 

And in case you missed it, we’ll share the exciting new Chromebooks that were recently announced at CES 2021.

Faster and easier web sign-in

Forget the hassle of typing in a long password or trying to remember which one you use for a specific online account. Now you can securely sign in to websites with the PIN or fingerprint you’ve set up to unlock your Chromebook with our new Web Authentication (WebAuthn) feature. Websites that support WebAuthn will let you use your Chromebook PIN or fingerprint ID—if your Chromebook has a fingerprint reader—instead of the password you’ve set for the website. And if you use 2-Step Verification to sign in, your Chromebook PIN or fingerprint ID can be used as the second factor, so you no longer need to pull out your security key or phone to authenticate.

To get started, sign in to a supported website like Dropbox, GitHub or Okta, and you’ll be prompted to switch to using WebAuthn for future sign-ins.

Image showing a web page with the WebAuthn tool pulled up. A pop-up on the screen says "verify your identity" and has spaces for numbers to be entered.

Beautify your space with a personalized lock screen

The Chrome OS screen saver lets you transform your Chromebook’s lock screen into a personalized smart display. Show off your favorite photo album from Google Photos or pick from hundreds of art gallery images. You can use your lock screen to check information like the current weather and what music is playing; you’ll also be able to pause a track or skip songs without unlocking your device. 

Go to your Chrome OS Settings and select Personalization > Screen saver to turn it on now.

Image shows an Android tablet next to an Android tablet pen. On the screen is a photo of a mountain and behind it is a pink-hued sunset.

ICYMI: New Chromebooks announced at CES 2021 

From left to right: Samsung Galaxy Chromebook 2, ASUS Chromebook Flip C536 and Acer Chromebook Spin 514

Our partners, Acer, ASUS and Samsung, introduced five new Chromebooks earlier this month: The Acer Chromebook Spin 514 and the ASUS Chromebook Flip CM5 are among the first AMD Ryzen Chromebooks in the market and deliver great performance for work and play at an affordable price. There’s also the ASUS Chromebook Flip C536 and the ASUS Chromebook CX9, which are some of the first Chromebooks to come with the latest 11th generation Intel processors, so they’re a powerful option for working or streaming video. And the Samsung Galaxy Chromebook 2 is the first Chromebook to feature a QLED display; it has a thin, light design and comes in Fiesta Red and Mercury Gray. 

That’s all for now—but check back here in March when we’ll have more news about what’s coming to Chromebooks.

Read More

From Girl Scout to Waymonaut

Thumbnail: Girl Scouts in front of Waymo truck

Earlier this month, we teamed up with Girl Scouts of Northeast Texas (GSNETX) to transport cookies for the annual Girl Scout Cookie Program with our Waymo Via truck fleet. Girl Scouts has long encouraged girls at every age and from all different backgrounds to explore science, technology, engineering, and math (STEM) fields through their girl-centric programming. And at Waymo, we need a diverse group of people to develop the Waymo Driver and deliver on our mission, so we look for opportunities to open these dialogues with younger generations who will forge the future of autonomous driving technology.

Our new partnership with Girl Scouts of Northeast Texas inspired us to ask if any of our Waymonauts are Girl Scout alums and how their experience influenced their journey to Waymo. With the organization’s focus on STEM, entrepreneurism, and leadership skills, it’s no surprise we found many alums among us, with really inspiring stories about what being a Girl Scout meant to them personally, how they found themselves in STEM roles, and how those two things are very closely intertwined. 
Emily Warman, Software Engineer, Planning/Behavior

5 years as a Girl Scout in Dupage County, IL

“To me, [being a] girl meant inside voices, dolls, clothes I couldn’t get dirty, legs without bruises, being small. I felt limited. I wanted to be seen for things I was good at: how fast I could run, how high I could climb, how brave I was… Girl Scouts was the first time I felt like I was a part of a group of girls. I didn’t have to be anything I wasn’t in Girl Scouts. It was the first time I felt like being a girl was OK, maybe even great.”

Emily and her mom during her bridging ceremony
Maggie Graupera, Recruiter, Hardware Engineering

3 years as a Girl Scout in San Jose, CA

“I was a Girl Scout for several years. Cookie sales was a formative experience for me. The act of selling is a skill I started to harness at an early age through Girl Scouts. That experience prepared me in my career and ultimately led me to my role of selling the experience of working at Waymo (as a recruiter).”

Maggie and her pup

Megan Quick, System Engineer

16 years as a Girl Scout and Girl Scout camp counselor in Colorado

“I think that some parts of the Girl Scout curriculum really helped with understanding STEM careers, and introducing STEM and engineering concepts. I became a mechanical engineer and that was influenced by many things, but the hands-on experiences that I got at the Girl Scout camps definitely gave me early experience with this. I also think that my leadership experience that I got in the Girl Scouts likely helped me to get a scholarship to college.”

Megan working in one of our hardware labs

Michelle Peacock, Global Head of Policy and Government Relations
4 years as a Girl Scout in Pasco, Washington

“What does being a Girl Scout mean to me? The answer can be found in the Girl Scout Promise. ‘On my honor, I will try to serve God and my country, to help people at all times, and to live by the Girl Scout Law.’ I literally use this pledge to think about how I should move about my life every single day. I also try to help those who reach out to me for career advice or networking. I do whatever it takes to make the time because that’s how you learn! I stay in touch with many women I’ve worked with throughout my career and am super proud to see them grow and be successful.”
Michelle’s daughter, Mary Charlotte, during her days as a Brownie

Sandy Karp, Senior Communications Associate

7 years as a Girl Scout in Peninsula Bay Area

“I’d say there are a lot of similarities between our mission at Waymo and the Girl Scout Law. At Waymo, we’re teaching the Waymo Driver to be friendly and helpful, considerate and caring, to respect authorities, and use resources wisely, and to make the world (and our roads) a better place.”
Sandy showing off her Waymo spirit!

Image caption and source: Girl Scouts of Northeast Texas cheer the new branding on a Waymo Via truck used to fulfill Girl Scout Cookie Program logistics (GSNETX staff photographer)

Read More

Lifecycle of a container on Cloud Run

Editor’s note: Today’s post comes from Wietse Venema, a software engineer and trainer at Binx.io and the author of the O’Reilly book about Google Cloud Run. In this post, Wietse shares how understanding the full container lifecycle, and the possible state transitions within it, can help you make the most of Cloud Run.

Serverless platform Cloud Run runs and autoscales your container-based application.  You can make the most of this platform when you understand the full container lifecycle and the possible state transitions within it. Let’s review the states, from starting to stopped. 

First, some context for those who have never heard of Cloud Run before (if you have, skip down to “Starting a Container”). The developer workflow on Cloud Run is a straightforward, three-step process:

  1. Write your application using your favorite programming language. Your application should start an HTTP server. 
  2. Build and package your application into a container image. 
  3. Deploy the container image to Cloud Run. 

Once you deploy your container image, you’ll get a unique HTTPS endpoint back. Cloud Run then starts your container on demand to handle requests and ensures that all incoming requests are handled by dynamically adding and removing containers. Explore the hands-on quickstart to try it out for yourself. 
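
As a minimal illustration of steps 1 and 2 (not from the original post; your application will look different), a container only needs to start an HTTP server on the port Cloud Run provides through the PORT environment variable:

```python
# app.py -- minimal HTTP server for a Cloud Run container (illustrative sketch).
# Cloud Run injects the port to listen on via the PORT environment variable,
# which defaults to 8080.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from Cloud Run\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Package a program like this into a container image with Docker or a buildpack and deploy it, and Cloud Run routes HTTPS requests to it.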

It’s important to understand the distinction between a container image and a container. A container image is a package with your application and everything it needs to run; it’s the archive you store and distribute. A container represents the running processes of your application.

You can build and package your application into a container image in multiple ways. Docker gives you low-level control and flexibility. Jib and Buildpacks offer a higher-level, hands-off experience. You don’t need to be a container expert to be productive with Cloud Run, but if you are, Cloud Run won’t be in your way. Choose the containerization method that works best for you and your project.

Starting a Container

When a container starts, the following happens:

  1. Cloud Run creates the container’s root filesystem by materializing the container image. 
  2. Once the container filesystem is ready, Cloud Run runs the entrypoint program of the container (your application).  
  3. While your application is starting, Cloud Run continuously probes port 8080 to check whether your application is ready. (You can change the port number if you need to.)
  4. Once your application starts accepting TCP connections, Cloud Run forwards incoming HTTP requests to your container.

Remember, Cloud Run can only deploy container images that are stored in a Docker repository on Artifact Registry. However, it doesn’t pull the entire image from there every time it starts a new container. That would be needlessly slow. 

Instead, Cloud Run pulls your container image from Artifact Registry only once, when you deploy a new version (called a revision on Cloud Run). It then makes a copy of your container image and stores it internally.

The internal storage is fast, ensuring that your image size is not a bottleneck for container startup time. Large images load as quickly as small ones. That’s useful to know if you’re trying to improve cold start latency. A cold start happens when a request comes in and no containers are available to handle it. In this case, Cloud Run will hold the request while it starts a new container. 

If you want to be sure a container is always available to handle requests, configure minimum instances, which will help reduce the number of cold starts.  

Because Cloud Run copies the image, you won’t get into trouble if you accidentally delete a deployed container image from Artifact Registry. The copy ensures that your Cloud Run service will continue to work. 

Serving Requests

When a container is not handling any requests, it is considered idle. On a traditional server, you might not think twice about this. But on Cloud Run, this is an important state:

  • An idle container is free. You’re only billed for the resources your container uses when it is starting, handling requests (with a 100ms granularity), or shutting down.  
  • An idle container’s CPU is throttled to nearly zero. This means your application will run at a really slow pace. That makes sense, considering this is CPU time you’re not paying for. 

When your container’s CPU is throttled, however, you can’t reliably perform background tasks on your container. Take a look at Cloud Tasks if you want to reliably schedule work to be performed later. 

When a container handles a request after being idle, Cloud Run will unthrottle the container’s CPU instantly. Your application — and your user — won’t notice any lag. 

Cloud Run can keep idle containers around longer than you might expect, too, in order to handle traffic spikes and reduce cold starts. Don’t count on it, though. Idle containers can be shut down at any time.

Shutting Down

If your container is idle, Cloud Run can decide to stop it. By default, a container just disappears when it is shut down. 

However, you can build your application to handle a SIGTERM signal (a Linux kernel feature). The SIGTERM signal warns your application that shutdown is imminent. That gives the application 10 seconds to clean things up before the container is removed, such as closing database connections or flushing buffers with data you still need to send somewhere else. You can learn how to handle SIGTERMs on Cloud Run so that your shutdowns will be graceful rather than abrupt.
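
As a rough sketch of what that looks like in a Python application (the cleanup logic here is only a placeholder):

```python
# Illustrative sketch: register a SIGTERM handler so the application can clean
# up (close database connections, flush buffers) during the grace period
# before Cloud Run removes the container.
import signal
import sys

def handle_sigterm(signum, frame):
    # Placeholder cleanup: close connections, flush pending data, etc.
    print("SIGTERM received, shutting down gracefully")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
# ... start the HTTP server here; the handler runs when shutdown begins.
```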

So far, I’ve looked at Cloud Run’s happy state transitions. What happens if your application crashes and stops while it is handling requests? 

When Things Go Wrong

Under normal circumstances, Cloud Run never stops a container that is handling requests. However, a container can stop suddenly in two cases: if your application exits (for instance due to an error in your application code) or if the container exceeds the memory limit. 

If a container stops while it is handling requests, it takes down all its in-flight requests at that time: Those requests will fail with an error. While Cloud Run is starting a replacement container, new requests might have to wait. That’s something you’ll want to avoid.

You can avoid running out of memory by configuring memory limits. By default, a container gets 256MB of memory on Cloud Run, but you can increase the allocation to 4GB. Keep in mind, though, if your application allocates too much memory, Cloud Run will also stop the container without a SIGTERM warning.

Summary

In this post, you learned about the entire lifecycle of a container on Cloud Run, from starting to serving and shutting down. Here are the highlights: 

  • Cloud Run stores a local copy of your container image to load it really fast when it starts a container.
  • A container is considered idle when it is not serving requests. You’re not paying for idle containers, but their CPU is throttled to nearly zero. Idle containers can be shut down. 
  • With SIGTERM you can shut down gracefully, but it’s not guaranteed to happen. Watch your memory limits and make sure errors don’t crash your application. 

I’m a software engineer and trainer at Binx.io and the author of the O’Reilly book about Google Cloud Run (read the full chapter outline). Connect with me on Twitter: @wietsevenema (open DMs).

Read More

How our customers modernize business intelligence with BigQuery and Looker

Businesses increasingly gather data to better understand their customers, products, marketing, and more. But unlocking valuable and meaningful insights from that data requires powerful, reliable, and scalable solutions. We hear from our BigQuery and Looker customers that they’ve been able to modernize business intelligence (BI) and allow self-service discovery on the data the business collects. Insights are quickly made available not just to data scientists or data analysts, but to everyone in your organization, including key business decision-makers.  

In this post, we hear from several Google Cloud customers who’ve used BigQuery and Looker and how they’re using their data insights to unlock new opportunities. 

Data analysis, accelerated

Sunrun, the leader in residential solar power, offers clean, reliable, affordable solar energy and battery storage solutions. With the increasing demand for renewable energy, Sunrun needed a better way to manage their growing volumes of data across installation operations, installed systems, customer operations, and sales.  

Their legacy data stack required IT and data team support for almost every internal data request. Sunrun’s legacy Oracle data warehouse wasn’t equipped to scale across growing analytics demands or easily unlock predictive insights, and this limitation led to data silos and conflicts. 

After their evaluation process, Sunrun migrated to Google Cloud’s smart analytics platform—including BigQuery and Looker—to reduce extract, transform, and load (ETL) complexity, run fast queries with ease, and make data accessible and trusted throughout the organization. 

Key benefits

  • Optimization of construction processes through insights into productivity and labor data, making planning more efficient and identifying areas of opportunity.

  • A 50% reduction in data warehouse design time, ETL, and data modeling.

  • A reduction of their entire data development cycle by more than 60% to enable accelerated decision-making with a modernized, simplified architecture.

  • An enablement of self-service analytics across their core business through a hub-and-spoke analytics model, ensuring all metrics are governed and trusted.

  • A unification of metric definitions throughout the company with LookML, Looker’s modeling layer.

  • Looker dashboards that facilitate regular executive huddles to set and execute data-driven strategies based on a single source of truth.

With Looker, Sunrun was able to bring the IT and business sides of the organization closer together, and improve their ability to recognize trends across their retail business, including the performance and impact of their relationships with major retail partners. Across Sunrun, data is analyzed with the customer’s experience and business goals in mind. Since Sunrun’s migration from their on-premises legacy data stack to a modern cloud environment, they’ve created infrastructure and business-wide efficiencies to help them meet the growing demand for solar power.

Data modernization is key to digital transformation. Legacy systems are slow, expensive, and cannot keep up with changing requirements. Data teams often spend more time on data pipelines than they do on data analysis or data science.

Business intelligence you can build upon

After relying upon Excel workbooks for data analysis, Emery Sapp & Sons, a heavy civil engineering company, chose BigQuery and Looker as key components of a new data stack that could scale with their business growth. This unified their wide variety of data sources and provided them with a holistic view of the business. Looker met their need to enable user-friendly self-service across the organization, so that all teams could access and act on accurate data through a business-user friendly interface, all with minimal maintenance.

Key benefits

  • Pre-built, automated cost and payroll reports in Looker deliver data on schedule in a fraction of the time that Emery Sapp & Sons teams used to spend generating reports.

  • A weekly profitability and accounts receivable dashboard with real-time data allows them to better predict cash flows and provide guidance on which customers they need to be talking with.

  • Tracking of Zendesk support tickets in Looker easily shows what’s open, urgent, high priority, pending, and closed, allowing them to identify trends.

  • Instant access to total outstanding amounts and bills owing reports for the accounts receivable team. Branch managers can sort that information by customer and prioritize follow-up communications. 

Now able to visualize the necessary information intuitively, Emery Sapp & Sons can quickly understand and act upon important data. Since modernizing their data stack, they’ve cut hours they once spent on manual activities and freed up time to concentrate on what the data means for their business. They can now focus on strategic initiatives that will fuel their growth and serve their customers.

Advancing care in an uncertain time

Commonwealth Care Alliance (CCA) is a community-based healthcare organization providing and coordinating care for high-need individuals who are often vulnerable or marginalized. At the first signs of COVID-19 last winter, CCA knew their members would need enhanced care and attention. Their staff and clinicians would need reliable data that was available quickly and integrated across many domains and sources. 

Fortunately, they had already put in place an advanced analytics platform with BigQuery and Looker, which the CCA data science team has used to deliver valuable information and predictive insights to CCA’s clinicians, and to develop and deploy data ops and machine learning (ML) ops capabilities. All of Google Cloud was available under a single business associate agreement (BAA) to meet CCA’s HIPAA requirements, and BigQuery proved elastic and available as a service. These two features offered reliable platform performance and allowed the small data science team to stay focused and nimble while remaining compliant.

Using a query abstraction and a columnar-based data engine, CCA could adapt to clinicians’ changing needs and provide data and predictive insights via general dashboards and role-specific dashboards—internally referred to as action boards, which help clinicians decide how to react to the specific needs of each member.  

Key benefits

  • Regular updates to BigQuery and Looker from CCA’s internal care management platform and electronic health records.

  • Quick creation and distribution of custom concepts—such as “high risk for COVID-19”—in Looker’s flexible modeling layer, LookML. 

  • Tailored dashboards allow each clinician and care manager to access data relevant to their members, including recommended actions for coordinated care.

  • Looker’s user attributes and permissions integrate with data, such as service disruptions, to allow clinicians to understand and react to changing conditions.

Using BigQuery and Looker, CCA’s data science team provides secure, companywide access to trusted data without burdening internal resources. As the COVID-19 pandemic and its effects continue to evolve, CCA continually uses the latest available information to update and guide their member support and care strategies. Now, the data science team can move on to deeper feature engineering and causal inference to enrich the insights delivered to their clinicians and the care provided to their members.

Saving $10,000 a month and more

Label Insight helps retailers and brands stay on top of trends and market demand by analyzing the packaging and labeling of different products. Their customers use this information to inform decisions around repackaging existing products or creating new products that are in line with the latest dietary trends. 

Previously, with their on-premises legacy BI system, numerous data silos, and cumbersome processes, it became increasingly costly, complicated, and time-consuming to extract helpful insights from the data quickly. Though Label Insight had rich data sets, accessing them would often take one person an entire week of analysis. This process was not scalable, repeatable, or reliable. 

Today, Label Insight’s new data platform includes BigQuery as their data warehouse and Looker for business intelligence. When evaluating data warehouse offerings, their executive team found that the more they used BigQuery, the more significant the benefits and ROI for the company. BigQuery now offers them virtually infinite, cost-effective, scalable storage capacity and unrivalled performance.

With easy-to-set-up dashboards, reporting, and analytics, Looker democratizes data for users across the entire Label Insight organization. Looker also enables governance and control, helping them make use of the high-quality data in BigQuery and freeing up their data team from constantly managing reporting requests. With Looker’s ability to integrate insights via embedded analytics into existing applications like Slack, Label Insight can access consistent, accurate data in their favorite task management tools, enabling everyone to continue providing value to their customers.

Key benefits

  • An ROI of 200%, with a savings of 120 labor hours on reporting per week, which has opened up time and resources for their teams to pursue new initiatives.

  • A recurring savings amounting to $10,000/month.

  • An approximately 60% (and growing) user engagement score on the platform, and with the help of their Looker superusers, goals to continue growing that number.

  • Extract, transform and load (ETL) automation with Fivetran provides quick and easy access to data across their 17 different sources.

Modernizing Label Insight’s data technology stack has transformed their business in all the ways they were hoping for. 

Home-run engagement for fans and clubs

The fan data engineering team at Major League Baseball (MLB) is responsible for managing more than 350 data pipelines to ingest data from third-party and internal sources and centralize it in an enterprise data warehouse (EDW). That EDW drives data-related initiatives across the internal product, marketing, finance, ticketing, shop, analytics, and data science departments, and from all 30 MLB Clubs. The team had previously used Teradata as their EDW.

MLB was experiencing issues such as query failures and latency and synchronization problems with their EDW. Providing direct user access was often challenging due to network connectivity restrictions and client software setup issues. With a migration from Teradata to BigQuery completed in 2019, MLB has realized numerous benefits from their modern, cloud-first data warehouse platform.

Key benefits

  • Side-by-side performance tests run with minimal cost and no commitment. By switching from on-demand to flat-rate pricing, MLB could fix costs, avoid surprise overages, and share unused capacity between departments.

  • Data democratization boosted by the secure, one-click sharing of datasets with any Workspace user or group. 

  • Access to BigQuery’s web console to review and run SQL queries on data, and to use Connected Sheets to analyze large data sets with pivot tables in a familiar interface. 

  • A 50% increase in query completion speed compared with the previous EDW. 

  • Integrations with several services MLB uses, including Google Ads, Google Campaign Manager, and Firebase.

  • Integration of BigQuery with Looker, MLB’s new BI tool, which provides a clean and high-performing interface for business users to access and drill into data. 

  • A reduction in operational overhead of the previous database administration.

  • Support coverage by Google for any major service issues, letting IT teams focus on more strategic work.

MLB can now take a more comprehensive and frictionless approach to using data to serve their fans and the league. Two projects already facilitated by their move to BigQuery and Looker include:

  • OneView: This initiative compiles over 30 pertinent data sources into a single table, with one row per fan, to facilitate downstream personalization and segmentation initiatives like news article personalization. 

  • Real-time form submission reporting: By using the Google-provided Dataflow template to stream data from Pub/Sub in real time to BigQuery, MLB creates Looker dashboards with real-time reporting on form submissions for initiatives such as their “Opening Day Pick ‘Em” contest. This allows their editorial team to create up-to-the-minute analyses of results.
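
As a much-simplified sketch of the same idea (this is not MLB's actual Dataflow pipeline, and the project, dataset, table, and field names below are invented), events can also be streamed into BigQuery with the Python client library, where a Looker dashboard can then query them in near real time:

```python
# Hypothetical sketch: stream form-submission events into BigQuery for
# near-real-time reporting. Assumes the google-cloud-bigquery library and an
# existing table with a matching schema; all identifiers are illustrative.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.contests.form_submissions"  # invented identifier

rows = [
    {"fan_id": "12345", "pick": "home_team", "submitted_at": "2021-01-28T12:00:00Z"},
]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    print("Insert errors:", errors)
```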

With MLB’s new data stack up and running, they’re able to serve data stakeholders better than ever before, and can harness their new data-driven capabilities to create better online and in-person experiences for their fans.

Ready to modernize your business intelligence? Explore the combined data analytics solution of BigQuery and Looker.

Read More

BeyondCorp Enterprise: Introducing a safer era of computing

Security issues continue to disrupt the status quo for global enterprises. Recent incidents highlight the need to re-think our security plans and operations; attackers are getting smarter, attacks are more sophisticated, and assumptions about what is and isn’t locked down no longer hold. The challenge, however, is to enable disruptive innovation in security without disrupting security operations. 

Today, we’re excited to announce the general availability of Google’s comprehensive zero trust product offering, BeyondCorp Enterprise, which extends and replaces BeyondCorp Remote Access. Google is no stranger to zero trust—we’ve been on this journey for over a decade with our own implementation of BeyondCorp, a technology suite we use internally to protect Google’s applications, data, and users. BeyondCorp Enterprise brings this modern, proven technology to organizations so they can get started on their own zero trust journey. Living and breathing zero trust for this long, we know that organizations need a solution that will not only improve their security posture, but also deliver a simple experience for users and administrators.

A modern, proven, and open approach to zero trust

Because our own zero trust journey at Google has been ongoing for a decade, we realize customers can’t merely flip a switch to make zero trust a reality in their own organizations, especially given varying resources and computing environments that might look different than ours. Nonetheless, these enterprises understand the zero trust journey is an imperative. 

As a result, we’ve invested many years in bringing our customers a solution that is cost-effective and requires minimal disruption to existing deployments and business processes, using trust, reliability and scale as our primary design criteria.

The end result is BeyondCorp Enterprise, which delivers three key benefits to customers and partners:

1) A scalable, reliable zero trust platform in a secure, agentless architecture

2) Continuous and real-time end-to-end protection

  • Embedded data and threat protection, newly added to Chrome, to prevent malicious or unintentional data loss and exfiltration and malware infections from the network to the browser.

  • Strong phishing-resistant authentication to ensure that users are who they say they are. 

  • Continuous authorization for every interaction between a user and a BeyondCorp-protected resource.  

  • End-to-end security from user to app and app to app (including microsegmentation) inspired by the BeyondProd architecture.

  • Automated public trust SSL certificate lifecycle management for internet-facing BeyondCorp endpoints powered by Google Trust Services

3) A solution that’s open and extensible, to support a wide variety of complementary solutions 

  • Built on an expanding ecosystem of technology partners in our BeyondCorp Alliance, which democratizes zero trust and allows customers to leverage existing investments.

  • Open at the endpoint to incorporate signals from partners such as CrowdStrike and Tanium, so customers can utilize this information when building access policies.

  • Extensible at the app to integrate into best-in-class services from partners such as Citrix and VMware.

In short, if cloud-native zero trust computing is the future—and we believe it is—then our solution is unmatched when it comes to providing scale, security and user experience. With BeyondCorp Enterprise, we are bringing our proven, scalable platform to customers, meeting their zero trust requirements wherever they are.

Customers are committed to zero trust

We’ve worked with customers around the world to battle-test our BeyondCorp Enterprise technology and to help them build a more secure foundation for a modern, zero-trust architecture within their organization. Vaughn Washington, VP of Engineering at Deliveroo, a global food delivery company headquartered in the UK, says, “We love that BeyondCorp Enterprise makes it so easy to bring the zero trust model to our distributed workforce. Having secure access to applications and associated data is critical for our business. With BeyondCorp Enterprise, we manage security at the app level, which removes the need for traditional VPNs and associated risks. With BeyondCorp Enterprise and Chrome Enterprise working together, we have additional visibility and controls to help us keep our data secure.”

“We want to improve the experience for our developers and continue to raise the bar on our security posture by adopting a zero trust architecture. Google’s experience with zero trust and the capabilities of BeyondCorp Enterprise made them an ideal partner for our journey,” said Tim Collyer, Director of Enterprise Information Security at Motorola Solutions, Inc.

Support from a robust ecosystem of partners

Our partners are key to our effort to further promote and democratize this technology. The BeyondCorp Alliance allows customers to leverage existing controls to make adoption easier while adding key functionality and intelligence that enables customers to make better access decisions. Check Point, Citrix, CrowdStrike, Jamf, Lookout, McAfee, Palo Alto Networks, Symantec (a division of Broadcom), Tanium and VMware are members of our Alliance who share our vision.

“As we enter a new era of security, enterprises want a seamless security model attuned to the realities of remote work, cloud applications, and mobile communications. Zero trust is that model, and critical to its efficacy is the ability to readily assess the health of endpoints. Who is accessing them? Do they contain vulnerabilities? Are they patched and compliant?” said Orion Hindawi, co-founder and CEO of Tanium. “With Google Cloud, we’re on a journey to offer today’s distributed businesses joint solutions that provide visibility and control into activities across any network to any application for both users and endpoints.”

Matthew Polly, VP WW Alliances, Channels, and Business Development at CrowdStrike said, “In today’s complex threat environment, zero trust security is fundamental for successful protection. BeyondCorp Enterprise customers will be able to seamlessly leverage the power of the CrowdStrike Falcon platform to deliver complete protection through verified access control to their business data and applications and secure their assets and users from the sophisticated tactics of cyber adversaries, including lateral movement.” 

“The rapid move to the cloud and remote work are creating dynamic work environments that promise to drive new levels of productivity and innovation. But they have also opened the door to a host of new security concerns and sparked a significant increase in cyberattacks,” said Fermin Serna, Chief Information Security Officer, Citrix. “To defend against them, enterprises must take an intelligent approach to workspace security that protects employees without getting in the way of their experience following the zero trust model. And with Citrix Workspace and BeyondCorp Enterprise, they can do just this.”

Dan Quintas, Sr. Director of Product Management at VMware also added, “Google’s commitment to security is clear and in today’s environment, device access policies are a key piece of the zero trust framework. Using Workspace ONE integrations in BeyondCorp Enterprise, customers can leverage device compliance status information to protect corporate information and ensure their users stay productive and secure.”

We also continue to collaborate with Deloitte’s industry-leading cyber practice to deliver end-to-end architecture, design, and deployment services to assist our customers’ zero-trust journeys.

“Implementing and operationalizing a zero trust architecture is critically important for organizations today,” said Deborah Golden, Deloitte Risk & Financial Advisory Cyber & Strategic Risk leader and principal, Deloitte & Touche LLP. “Both Google Cloud and Deloitte are well positioned to deliver this secure transformative change for our clients and together provide a modern security approach that’s seamless to integrate into existing infrastructures.”

Take the next step

The adoption of zero trust is an imperative for security modernization, and BeyondCorp Enterprise can help organizations overcome the challenges that come with the embrace of such a disruptive innovation. To learn more about BeyondCorp Enterprise, register for our upcoming webinar on Feb 23 and be sure to check out our BeyondCorp product home page.

To learn more about the security features of Chrome Enterprise, including the new threat and data protection features available in BeyondCorp Enterprise, attend our upcoming webinar on January 28 by registering here.

Read More

Extending enterprise zero trust models to the web

For over a decade, Chrome has been committed to advancing security on the web, and we’re proud of the end-user and customer safety improvements we’ve delivered over the years. We take our responsibility seriously, and we continue to work on ways to better protect billions of users around the world, whether it’s driving the industry towards HTTPS, introducing and then advancing the concept of a browser sandbox, improving phishing and malware detection via Safe Browsing improvements or working alongside Google’s Project Zero team to build innovative exploit mitigations. 

To continue our work of making a safer web for everyone, we’ve partnered with Google’s Cloud Security team to expand what enterprises should expect from Chrome and web security. Today the Cloud Security team is announcing BeyondCorp Enterprise, our new zero trust product offering, built around the principle of zero trust: that access must be secured, authorized and granted based on knowledge of identities and devices, and with no assumed trust in the network. With Chrome, BeyondCorp Enterprise is able to deliver customers a zero trust solution that protects data, better safeguards users against threats in real time and provides critical device information to inform access decisions, all without the need for added agents or extra software. These benefits are built right into Chrome, where users are already spending much of their workday accessing the apps and resources they need to be productive, and IT teams can easily manage these controls right through our Chrome Browser Cloud Management offering.

By extending zero trust principles to Chrome, we’re introducing the following advanced security capabilities that will help keep users and their company data safer than ever before:

Enhanced malware and phishing prevention: BeyondCorp Enterprise allows for real-time URL checks and deep scanning of files for malware.

Notification that reads "sample.zip is dangerous, so Chrome has blocked it."

Sensitive data protection across the web: IT teams can enforce a company’s customized rules for what types of data can be uploaded, downloaded or copied and pasted across sites.

Notification that reads "This file has sensitive or dangerous content. Remove this content and try again.

Visibility and insights: Organizations can get more insights into potential risks or suspicious activity through cloud-based reporting, including tracking of malicious downloads on corporate devices or employees entering passwords on known phishing sites. 

Three bar charts labeled "Chrome high risk users," "Chrome high risk domains," and "Chrome data protection summary."

Including Chrome in your zero trust strategy is critical not only because your employees spend much of the working day in the browser, but also because Chrome is in a unique position to identify and prevent threats across multiple web-based apps. Enhanced capabilities surrounding data protection and loss prevention protects organizations from both external threats and internal leak risks, many of which may be unintentional. We’ve built these capabilities into Chrome in a way that gives IT and security teams flexibility around how to configure policies and set restrictions, while also giving administrators more visibility into potentially harmful or suspicious activities. Naturally, these threat and data protections are also extended to Chrome OS devices, which offer additional proactive and built-in security protections.  

As with many of the major security advances Chrome has introduced in the past, we know it takes time to adopt new approaches. We’re here to help with a solution that is both simple and more secure for IT teams and their users. As you look at 2021 and where your security plans will take you, check out BeyondCorp Enterprise.

Chrome will host a webinar on Thursday, January 28, highlighting some of our recent enterprise enhancements, and offering a preview of what’s to come in 2021. We’ll also talk more about the Chrome-specific capabilities of BeyondCorp Enterprise. We hope you can join us!

Read More