Monthly: June 2020

New IT Cost Assessment program: Unlock value to reinvest for growth

If you’re in IT, chances are you’re under pressure to prioritize investments and optimize costs in response to the current economic climate. According to a recent survey of our customers [1], that situation describes 84% of IT decision makers. Likewise, Forrester Research has said CIOs could face a minimum of 5% budget cuts in 2020 [2], and IDC is forecasting a 5.1% decline in worldwide IT spending [3]. These are sobering numbers.

Here at Google Cloud, we understand the need for clear, actionable ways to optimize your IT costs—and the flexibility to adjust your IT spend to the most critical areas dynamically. To help, we developed a new IT Cost Assessment program that lets you understand how your company’s IT spend compares to your industry peers, so you can quickly identify key areas of opportunity to unlock value to reinvest for growth. 

Google Cloud has a proven, structured approach to validating these IT cost reduction opportunities. Every business is unique, but knowing where you stand relative to your industry peers is invaluable insight when strategizing how to survive in this new economic reality. The first step of our IT Cost Assessment is to analyze your individual IT spend and compare it to industry benchmark data, derived from our extensive experience working with clients and from trusted third-party research firms, giving you a clear view of your cost optimization opportunities.

Then, in a second phase, we propose the Google Cloud solutions best aligned to helping you reap the benefits of IT cost reduction, reduce physical infrastructure complexity, leverage a hybrid-cloud strategy, and enhance security, compliance, and flexibility. In addition, our differentiated capabilities across AI/ML and big data can help you identify opportunities to optimize processes and drive additional operational efficiencies.

Once you have this baseline of your performance, we deliver a detailed TCO analysis, ROI projections, and an implementation plan, with Google Cloud solutions that will help you migrate and modernize your legacy environment and deliver a positive impact to your bottom line.

We have partnered with leading enterprise companies in the manufacturing, financial services, healthcare and life sciences, and insurance sectors, among others, and delivered cost savings across their IT environments. In the aforementioned customer survey, three out of four respondents reported savings of up to 30% within the first six months of becoming a Google Cloud customer. And presented with the statement, “Google Cloud helped me increase our operational efficiency and optimize IT spend,” nine in ten agreed.

Click here to learn more about the IT Cost Assessment program, and to request an engagement. We look forward to helping you navigate—and thrive—through these challenging times.


1. TechValidate survey of 122 Google Cloud customers.
2. “Where To Adjust Tech Budgets In The Pandemic Recession,” Forrester, May 19, 2020.
3. International Data Corp., https://www.idc.com/getdoc.jsp?containerId=prUS46268520


Filter out disruptive noise in Google Meet

Quick launch summary 

To help limit interruptions to your meeting, Google Meet can now intelligently filter out background noise like keyboard typing, doors opening and closing, and construction outside your window. Cloud-based AI is used to remove noise from your audio input while still letting your voice through. 
We had previously announced this top-requested feature and are now beginning to roll it out to G Suite Enterprise and G Suite Enterprise for Education customers using Meet on the web. We will bring the feature to mobile users soon, and will announce on the G Suite Updates blog when it’s available. 

Rollout pace 

  • Now available to all web users in most countries. 
  • For users in Australia, Brazil, India, Japan, and New Zealand, extended rollout (potentially longer than 15 days for feature visibility) starting on June 30, 2020. 
  • Not currently available in some countries (currently including South Africa, the UAE, and surrounding locales). See our Help Center for more availability details.

Availability 

  • Available to G Suite Enterprise and G Suite Enterprise for Education customers*
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, and G Suite for Nonprofits customers 


*Availability in alternative packages is variable and based on your services.


System hardening in Android 11

Posted by Platform Hardening Team

In Android 11 we continue to increase the security of the Android platform. We have moved to safer default settings, migrated to a hardened memory allocator, and expanded the use of compiler mitigations that defend against classes of vulnerabilities and frustrate exploitation techniques.

Initializing memory

We’ve enabled forms of automatic memory initialization in both Android 11’s userspace and the Linux kernel. Uninitialized memory bugs occur in C/C++ when memory is used without having first been initialized to a known safe value. These types of bugs can be confusing, and even the term “uninitialized” is misleading. Uninitialized may seem to imply that a variable has a random value. In reality it isn’t random: it has whatever value was previously placed there, and that value may be predictable or even attacker controlled. Unfortunately this behavior can result in serious vulnerabilities, such as information disclosure bugs like ASLR bypasses, or control flow hijacking via a stack or heap spray. Another possible side effect of using uninitialized values is that advanced compiler optimizations may transform the code unpredictably, as this is considered undefined behavior by the relevant C standards.
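
To make this concrete, here’s a minimal C sketch (the function and names are hypothetical) of the bug class described above: the status variable is assigned only on the error path, so the success path returns whatever stale value an earlier call left on the stack.

    #include <stdio.h>
    #include <string.h>

    /* Sketch of an uninitialized-memory bug. */
    static int parse_record(const char *buf, size_t len) {
        int status;                 /* BUG: never initialized */
        char header[16];

        if (len < sizeof(header)) {
            status = -1;            /* only the error path assigns it */
        } else {
            memcpy(header, buf, sizeof(header));
            /* validation elided; status is never set on this path */
        }
        return status;              /* success path leaks a stale stack value */
    }

    int main(void) {
        /* 25 bytes takes the "success" path and returns an uninitialized status. */
        printf("parse_record returned %d\n",
               parse_record("some input bytes here....", 25));
        return 0;
    }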

In practice, uses of uninitialized memory are difficult to detect. Such errors may sit in the codebase unnoticed for years if the memory happens to be initialized with some “safe” value most of the time. When uninitialized memory results in a bug, it is often challenging to identify the source of the error, particularly if it is rarely triggered.

Eliminating an entire class of such bugs is a lot more effective than hunting them down individually. Automatic stack variable initialization relies on a feature in the Clang compiler that initializes local variables with either zeros or a pattern.

Initializing to zero provides safer defaults for strings, pointers, indexes, and sizes. The downsides of zero init are less-safe defaults for return values, and exposing fewer bugs where the underlying code relies on zero initialization. Pattern initialization tends to expose more bugs and is generally safer for return values and less safe for strings, pointers, indexes, and sizes.
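
As a rough sketch of the difference (the underlying Clang option is -ftrivial-auto-var-init=zero or -ftrivial-auto-var-init=pattern), consider what an uninitialized local “contains” under each mode:

    #include <stdio.h>

    int main(void) {
        int count;    /* deliberately left uninitialized */
        void *ptr;    /* likewise */

        /* Built normally, reading these is undefined behavior.
         * Zero init:    count == 0 and ptr == NULL -- safe-looking defaults.
         * Pattern init: both hold a repeated filler byte (0xAA on many
         * targets), so using count as a size or dereferencing ptr tends
         * to fail loudly rather than masquerade as valid data. */
        printf("count = %d, ptr = %p\n", count, ptr);
        return 0;
    }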

Initializing Userspace:

Automatic stack variable initialization is enabled throughout the entire Android userspace. During the development of Android 11, we initially selected pattern init in order to uncover bugs relying on zero init, and then moved to zero init after a few months for increased safety. Platform OS developers can build with `AUTO_PATTERN_INITIALIZE=true m` if they want help uncovering bugs that rely on zero init.

Initializing the Kernel:

Automatic stack and heap initialization were recently merged in the upstream Linux kernel. We have made these features available on earlier versions of Android’s kernel, including 4.14, 4.19, and 5.4. These features enforce initialization of local variables and heap allocations with known values that cannot be controlled by attackers and are useless when leaked. Both features incur a performance overhead, but they also prevent undefined behavior, improving both stability and security.

For kernel stack initialization we adopted the CONFIG_INIT_STACK_ALL configuration option from upstream Linux. It currently relies on Clang pattern initialization for stack variables, although this is subject to change in the future.

Heap initialization is controlled by two boot-time flags, init_on_alloc and init_on_free, with the former wiping freshly allocated heap objects with zeroes (think s/kmalloc/kzalloc in the whole kernel) and the latter doing the same before the objects are freed (this helps to reduce the lifetime of security-sensitive data). init_on_alloc is a lot more cache-friendly and has a smaller performance impact (within 2%), so it has been chosen to protect Android kernels.
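
The “s/kmalloc/kzalloc” framing above translates to userspace roughly as follows; this is an analogy sketch, not the kernel implementation:

    #include <stdlib.h>
    #include <string.h>

    /* init_on_alloc=1 behaves as if every kmalloc() were kzalloc(),
     * i.e. as if every malloc() below were calloc(). */
    void *alloc_wiped(size_t n) {
        return calloc(1, n);        /* fresh object arrives zeroed */
    }

    /* init_on_free=1 behaves as if every object were wiped just before
     * release, shortening the lifetime of security-sensitive data. */
    void free_wiped(void *p, size_t n) {
        if (p != NULL) {
            memset(p, 0, n);        /* wipe before the allocator reuses it */
        }
        free(p);
    }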

Scudo is now Android’s default native allocator

In Android 11, Scudo replaces jemalloc as the default native allocator for Android. Scudo is a hardened memory allocator designed to help detect and mitigate memory corruption bugs in the heap, such as use-after-frees, double frees, and heap-based buffer overflows.

Scudo does not fully prevent exploitation but it does add a number of sanity checks which are effective at strengthening the heap against some memory corruption bugs.

It also proactively organizes the heap in a way that makes exploitation of memory corruption more difficult, by reducing the predictability of the allocation patterns, and separating allocations by sizes.

In our internal testing, Scudo has already proven its worth by surfacing security and stability bugs that were previously undetected.

Finding Heap Memory Safety Bugs in the Wild (GWP-ASan)

Android 11 introduces GWP-ASan, an in-production heap memory safety bug detection tool that’s integrated directly into the native allocator Scudo. GWP-ASan probabilistically detects and provides actionable reports for heap memory safety bugs when they occur, works on 32-bit and 64-bit processes, and is enabled by default for system processes and system apps.

GWP-ASan is also available for developer applications via a one line opt-in in an app’s AndroidManifest.xml, with no complicated build support or recompilation of prebuilt libraries necessary.
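
Concretely, the opt-in is a single attribute on the application element of the manifest; a minimal fragment looks like this:

    <!-- Enables GWP-ASan for all of the app's processes. -->
    <application android:gwpAsanMode="always">
        <!-- activities, services, and other components as usual -->
    </application>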

Software Tag-Based KASAN

Continuing our work on adopting the Arm Memory Tagging Extension (MTE) in Android, Android 11 includes support for kernel HWASAN, also known as Software Tag-Based KASAN. Userspace HWASAN has been supported since Android 10.

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to find out-of-bounds and use-after-free bugs in the Linux kernel. Its Software Tag-Based mode is a software implementation of the memory tagging concept for the kernel. Software Tag-Based KASAN is available in the 4.14, 4.19, and 5.4 Android kernels, and can be enabled with the CONFIG_KASAN_SW_TAGS kernel configuration option. Currently Tag-Based KASAN only supports tagging of slab memory; support for other types of memory (such as stack and globals) will be added in the future.

Compared to Generic KASAN, Tag-Based KASAN has significantly lower memory requirements (see this kernel commit for details), which makes it usable on dogfood testing devices. Another use case for Software Tag-Based KASAN is checking existing kernel code for compatibility with memory tagging. Because Tag-Based KASAN is built on concepts similar to the future in-kernel MTE support, making sure that kernel code works with Tag-Based KASAN will ease in-kernel MTE integration later on.

Expanding existing compiler mitigations

We’ve continued to expand the compiler mitigations rolled out in prior releases as well. This includes adding both integer and bounds sanitizers to some core libraries that were lacking them. For example, the libminikin fonts library and the libui rendering library are now bounds sanitized. We’ve hardened the NFC stack by implementing both the integer overflow sanitizer and the bounds sanitizer in those components.
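
As a sketch of the bug class these sanitizers target (Clang’s -fsanitize=unsigned-integer-overflow and -fsanitize=bounds; the function here is hypothetical):

    #include <stdint.h>
    #include <stdlib.h>

    /* Without the sanitizer, the multiplication can silently wrap:
     * 0x10000 * 0x10000 == 0 in uint32_t, yielding an undersized
     * (zero-byte) buffer. With the integer overflow sanitizer, the
     * process aborts at the wraparound instead. */
    void *alloc_elements(uint32_t count, uint32_t elem_size) {
        uint32_t total = count * elem_size;
        return malloc(total);
    }

    int main(void) {
        void *p = alloc_elements(0x10000u, 0x10000u);  /* wraps to 0 bytes */
        free(p);
        return 0;
    }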

In addition to hard mitigations like sanitizers, we also continue to expand our use of Control Flow Integrity (CFI) as an exploit mitigation. CFI has been enabled in Android’s networking daemon, DNS resolver, and more of our core JavaScript libraries like libv8 and the PacProcessor.
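
As a sketch of what CFI enforces (Clang’s -fsanitize=cfi, which requires link-time optimization): each indirect call is checked against the function pointer’s static type, so corruption that redirects a pointer to code of the wrong type aborts the process at the call site rather than hijacking control flow.

    #include <stdio.h>

    typedef void (*event_handler)(int);

    static void log_event(int code) {
        printf("event %d\n", code);
    }

    int main(void) {
        event_handler handler = log_event;  /* legitimate, type-correct target */
        handler(42);                        /* CFI validates this indirect call */
        return 0;
    }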

The effectiveness of our software codec sandbox

Prior to the release of Android 10, we announced a new constrained sandbox for software codecs, and we’re really pleased with the results. Thus far, Android 10 is the first Android release since the infamous Stagefright vulnerabilities in Android 5.0 with zero critical-severity vulnerabilities reported in the media frameworks.

Thank you to Jeff Vander Stoep, Alexander Potapenko, Stephen Hines, Andrey Konovalov, Mitch Phillips, Ivan Lozano, Kostya Kortchinsky, Christopher Ferris, Cindy Zhou, Evgenii Stepanov, Kevin Deus, Peter Collingbourne, Elliott Hughes, Kees Cook and Ken Chen for their contributions to this post.


SpineNet: A Novel Architecture for Object Detection Discovered with Neural Architecture Search

Posted by Xianzhi Du, Software Engineer and Jaeyoun Kim, Technical Program Manager, Google Research

Convolutional neural networks created for image tasks typically encode an input image into a sequence of intermediate features that capture the semantics of an image (from local to global), where each subsequent layer has a lower spatial dimension. However, this scale-decreased model may not be able to deliver strong features for multi-scale visual recognition tasks where recognition and localization are both important (e.g., object detection and segmentation). Several works including FPN and DeepLabv3+ propose multi-scale encoder-decoder architectures to address this issue, where a scale-decreased network (e.g., a ResNet) is taken as the encoder (commonly referred to as a backbone model). A decoder network is then applied to the backbone to recover the spatial information.

While this architecture has yielded improved success for image recognition and localization tasks, it still relies on a scale-decreased backbone that throws away spatial information by down-sampling, which the decoder then must attempt to recover. What if one were to design an alternate backbone model that avoids this loss of spatial information, and is thus inherently well-suited for simultaneous image recognition and localization?

In our recent CVPR 2020 paper “SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization”, we propose a meta architecture called a scale-permuted model that enables two major improvements on backbone architecture design. First, the spatial resolution of intermediate feature maps should be able to increase or decrease at any time, so that the model can retain spatial information as it grows deeper. Second, the connections between feature maps should be able to cross feature scales to facilitate multi-scale feature fusion. We then use neural architecture search (NAS) with a novel search space design that includes these features to discover an effective scale-permuted model. We demonstrate that this model is successful in multi-scale visual recognition tasks, outperforming networks with standard, scale-decreased backbones. To facilitate continued work in this space, we have open-sourced the SpineNet code to the TensorFlow TPU GitHub repository (TensorFlow 1) and the TensorFlow Model Garden GitHub repository (TensorFlow 2).

A scale-decreased backbone is shown on the left and a scale-permuted backbone is shown on the right. Each rectangle represents a building block. Colors and shapes represent different spatial resolutions and feature dimensions. Arrows represent connections among building blocks.

Design of SpineNet Architecture
In order to efficiently design the architecture for SpineNet, and avoid a time-intensive manual search of what is optimal, we leverage NAS to determine an optimal architecture. The backbone model is learned on the object detection task using the COCO dataset, which requires simultaneous recognition and localization. During architecture search, we learn three things:

  • Scale permutations: The orderings of network building blocks are important because each block can only be built from those that already exist (i.e., with a “lower ordering”). We define the search space of scale permutations by rearranging intermediate and output blocks, respectively.
  • Cross-scale connections: We define two input connections for each block in the search space. The parent blocks can be any block with a lower ordering or a block from the stem network.
  • Block adjustments (optional): We allow the block to adjust its scale level and type.

The architecture search process from a scale-decreased backbone to a scale-permuted backbone.

Taking the ResNet-50 backbone as the seed for the NAS search, we first learn scale-permutation and cross-scale connections. All candidate models in the search space have roughly the same computation as ResNet-50 since we just permute the ordering of feature blocks to obtain candidate models. The learned scale-permuted model outperforms ResNet-50-FPN by +2.9% average precision (AP) in the object detection task. The efficiency can be further improved (-10% FLOPs) by adding search options to adjust scale and type (e.g., residual block or bottleneck block, used in the ResNet model family) of each candidate feature block.

We name the learned 49-layer scale-permuted backbone architecture SpineNet-49. SpineNet-49 can be further scaled up to SpineNet-96/143/190 by repeating blocks two, three, or four times and increasing the feature dimension. An architecture comparison between ResNet-50-FPN and the final SpineNet-49 is shown below.

The architecture comparison between a ResNet backbone (left) and the SpineNet backbone (right) derived from it using NAS.

Performance
We demonstrate the performance of SpineNet models through comparison with ResNet-FPN. Using similar building blocks, SpineNet models outperform their ResNet-FPN counterparts by ~3% AP at various scales while using 10-20% fewer FLOPs. In particular, our largest model, SpineNet-190, achieves 52.1% AP on COCO for a single model without multi-scale testing during inference, significantly outperforming prior detectors. SpineNet also transfers to classification tasks, achieving a 5% top-1 accuracy improvement on the challenging iNaturalist fine-grained dataset.

Performance comparisons of SpineNet models and ResNet-FPN models adopting the RetinaNet detection framework on COCO bounding box detection.
Performance comparisons of SpineNet models and ResNet models on ImageNet classification and iNaturalist fine-grained image classification.

Conclusion
In this work, we identify that the conventional scale-decreased model, even with a decoder network, is not effective for simultaneous recognition and localization. We propose the scale-permuted model, a new meta-architecture, to address the issue. To prove the effectiveness of scale-permuted models, we learn SpineNet via neural architecture search on object detection and demonstrate that it can be used directly in image classification. In the future, we hope the scale-permuted model will become the meta-architecture design of backbones across many visual tasks beyond detection and classification.

Acknowledgements
Special thanks to the co-authors of the paper: Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V. Le, and Xiaodan Song. We also would like to acknowledge Yeqing Li, Youlong Cheng, Jing Li, Jianwei Xie, Russell Power, Hongkun Yu, Chad Richards, Liang-Chieh Chen, Anelia Angelova, and the larger Google Brain Team for their help.


Stadia Savepoint: June updates

With June coming to an end, it’s time for another update in our Stadia Savepoint series. Here are the updates we’ve made this month to the Stadia platform:

Touch controls on mobile

Access touch controls within any game on your mobile device when a controller is not already connected.

Expanded OnePlus compatibility

Stadia is now compatible with OnePlus 5, OnePlus 6, and OnePlus 7 series mobile devices.

Per-device resolution settings

Added the ability to set your preferred resolution on each device that you play Stadia on. 

Experiments tab supports additional mobile devices

Any Android phone that can install the Stadia app can play games using the Experiments tab in the settings menu. 

Wireless Stadia Controller functionality on mobile

We’re rolling out support for wireless play using the Stadia Controller on your mobile device. Just link your Stadia Controller to your phone by entering the linking code shown on your screen.

This month, players adventured across the lands of Tamriel in The Elder Scrolls Online and learned how to pull off trick combos on boats in Wave Break, in addition to many other games now available for purchase on the Stadia store. We also announced new games coming to Stadia, including the survival adventure Windbound on August 28 and a chance to enter a world inspired by classic JRPGs with Cris Tales on November 17.

If you sign up for Stadia, you’ll get one free month of Stadia Pro and instant access to eighteen games, including PLAYERUNKNOWN’S BATTLEGROUNDS, Zombie Army 4: Dead War, Destiny 2: The Collection, and The Elder Scrolls Online. In addition, if you’ve ever signed up for Stadia Pro, you’ll receive $10 off your next purchase of any game from the Stadia store.

Start playing Stadia on your TV for $99.99 with the new Stadia Premiere Edition, complete with a Stadia Controller and Chromecast Ultra. 


That’s it for June—we’ll be back soon to share more updates. As always, stay tuned to the Stadia Community Blog, Facebook, and Twitter for the latest news.


Announcing Enhanced Smart Home Analytics

Posted by Toni Klopfenstein, Developer Advocate

When creating scalable applications, consistent and reliable monitoring of resources is a valuable tool for any developer. Today we are releasing enhanced analytics and logging for Smart Home Actions. This feature enables you to more quickly identify and respond to errors or quality issues that may arise.

Request Latency Dashboard

You can now access the smart home dashboard with pre-populated metrics charts for your Actions on the Analytics tab in the Actions Console, or through Cloud Monitoring. These metrics help you quantify the health and usage of your Action, and gain insight into how users engage with your Action. You can view:

  • Execution types and device traits used
  • Daily users and request counts
  • User query response latency
  • Success rate for Smart Home engagements
  • Comparison of cloud and local fulfilment interactions

Successful Requests Dashboard

Cloud Logging provides detailed logs based on the events observed in Cloud Monitoring.

We’ve added additional features to the error logs to help you quickly debug why intents fail, which particular device commands malfunction, or whether your local fulfilment falls back to cloud fulfilment.

New details added to the event logs include:

  • Cloud vs. local fulfilment
  • EXECUTE vs. QUERY intents
  • Locale of request
  • Device Type

You can additionally export these logs through Cloud Pub/Sub, and build log-based metrics and alerts for your development teams to gain insights into common issues.

For more guidance on accessing your Smart Home Action analytics and logs, check out the developer guide or watch the video.

We want to hear from you! Continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!


To our YouTube TV members: an update to our content and price

In 2017, we introduced YouTube TV, live TV designed for the YouTube generation: those who want to stream TV when and how they want, without commitments. We’ve just passed the three-year mark, so I wanted to take this opportunity to update you on how we’re thinking about YouTube TV.

Since launch, we’ve listened to your feedback and worked to build an experience that fits the needs of everyone in your family, by adding highly-requested content like PBS and Discovery Network brands, including HGTV and Food Network, and launching new features to reinvent how you watch live TV.

As we continue to build a best-in-class experience for you, we have a few updates to share: new content launching today, new features we’ve recently introduced, and an updated price.

More content to enjoy, starting today

Earlier this year, we let you know that we’d soon be adding more of ViacomCBS’s family of brands to YouTube TV, which includes 8 of your favorite channels launching today: BET, CMT, Comedy Central, MTV, Nickelodeon, Paramount Network, TV Land and VH1.

That means you can follow the biggest stories in news, politics and pop culture with “The Daily Show with Trevor Noah;” catch up with Catelynn, Cheyenne, Maci, Mackenzie and Amber on “Teen Mom OG;” join the search for America’s next drag superstar with “RuPaul’s Drag Race;” go on an adventure with “SpongeBob SquarePants;” and follow the fictional lives of the Dutton family on the new season of “Yellowstone,” airing now.

BET Her, MTV2, MTV Classic, Nick Jr., NickToons, and TeenNick are also set to come to YouTube TV at a later date.

In addition to our base plan, which now includes more than 85 channels, we also recently added Cinemax and HBO Max (which includes all of HBO plus a robust library of content and original series) to our growing list of add-on channels, making YouTube TV your one-stop shop for entertainment.

The latest features to try while you sit back with your favorite shows

We’re always listening to our members’ feedback on the channels they want to see on YouTube TV, but we’re also continuously building new features and making improvements that reinvent how you watch TV and interact with your favorite content. Here are just a few of the features and updates we’ve launched recently:

  • Jump to the news that matters most to you: We’ve been testing a new feature that allows you to jump to various segments within select news programs on YouTube TV, and have just brought this to all users. Similar to our key plays view for sports, on some programs you’ll be able to jump to specific news clips within the complete recording. This feature is available on TV screens now and will come to mobile devices in the coming weeks.
  • Control over your recorded content: In addition to your unlimited DVR space, YouTube TV members can pause, rewind, and fast forward through all their recorded shows, regardless of network.
  • Go easy on the eyes with Dark Mode: We recently introduced a dark theme to both desktop and mobile devices to help tone down your screen’s glare and experience YouTube TV with a dark background.
  • Mark a show as watched: You now have an option to select “Mark as Watched” on desktop and mobile devices for any TV show you’ve already seen, a top requested feature from our members.
  • A fresh new look for the Live Guide: Based on your feedback, we’ve updated the Live Guide on desktop so you can see what’s on now and scroll ahead up to 7 days into the future.

An update to your price

As we continue to evaluate how to provide the best possible service and content for you, our membership price will be $64.99. This new price takes effect today, June 30, for new members. Existing subscribers will see these changes reflected in their subsequent billing cycle on or after July 30.

We don’t take these decisions lightly, and realize how hard this is for our members. That said, this new price reflects the rising cost of content and we also believe it reflects the complete value of YouTube TV, from our breadth of content to the features that are changing how we watch live TV. YouTube TV is the only streaming service that includes a DVR with unlimited storage space, plus 6 accounts per household each with its own unique recommendations, and 3 concurrent streams. It’s all included in the base cost of YouTube TV, with no contract and no hidden fees.

While we would love every member to continue to stay with our service, we understand that some of you may choose to pause or cancel your membership. We want to make YouTube TV flexible for you, so members can pause or cancel anytime here.

As the streaming industry continues to evolve, we are working to build new flexible models for YouTube TV users, so we can continue to provide a robust and innovative experience for everyone in your household without the commitments of traditional TV.

Thank you for being a part of the YouTube TV family. We’ll continue to work to make it the best place to watch live TV, how you want it.

Christian Oestlien, Vice President of Product Management, YouTube TV


Connected Sheets now generally available, replacing Sheets data connector

What’s changing

We’re making Connected Sheets generally available to G Suite Enterprise and G Suite Enterprise for Education customers. Connected Sheets helps you analyze BigQuery data in Google Sheets. It was previously available in beta. Connected Sheets will replace Sheets data connector, a more limited way to connect Sheets and BigQuery.

Read more about how you can use it to analyze petabytes of data with Google Sheets in our Cloud Blog post.

Who’s impacted

End users

Why you’d use it

Connected Sheets links Google Sheets to BigQuery, so you can analyze large BigQuery datasets using familiar spreadsheet tools and operations. This means users don’t need to know SQL and can generate insights with basic spreadsheet operations like formulas, charts, and pivot tables.

This makes it easier for more members of your organization to understand, collaborate on, and generate insights from data. Specifically, it can help subject matter experts work with data without relying on analysts, who may be less familiar with the context of the data or be overloaded with a wide range of data requests.

Connected Sheets includes all the capabilities of the legacy Sheets data connector, plus several enhancements: you can analyze and visualize data in Sheets without first extracting it, preview data directly in a Sheet, and schedule data refreshes to avoid analyzing stale data.

Learn more about how you can analyze petabytes of data with Google Sheets on the Cloud Blog.

Getting started

  • Admins: No action required; Connected Sheets will be ON by default. To use it, you must have set up BigQuery for your organization, and users must have access to tables or views in BigQuery. Use our Help Center to learn more about how to set up Connected Sheets.
  • End users: This feature will be ON by default. To use it, users must have access to tables or views in BigQuery. Use our Help Center to learn more about Connected Sheets.

Rollout pace

  • Rapid and Scheduled Release domains: Extended rollout (potentially more than 15 days for feature visibility) starting on June 30, 2020. We expect rollout to complete within a month. 

Availability 

  • Available to G Suite Enterprise and G Suite Enterprise for Education customers* 
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, and G Suite for Nonprofits customers 


*Availability in alternative packages is variable and based on your services.


Fewer than one percent of Googlers get to visit a data center, but I did

For years I’ve wondered what it’s like behind the protected walls of a Google data center, and I’m not alone. In my job at Google, I spend my days working with developers. Our data centers are crucial to the work that they do, but most have never actually set foot inside a data center. And until recently, neither had I. I went on a mission to find answers to common questions like: Why are visits so tightly restricted? How secure is a Google data center? How do we meet regulatory requirements? Here’s what I found out.

To keep our customers’ data safe, we need to make sure the physical structure of the data center is absolutely secure. Each data center is protected with six layers of physical security designed to thwart unauthorized access. Watch the video above to follow my journey through these layers to the core of a data center, and read on to learn even more.

“Least privilege” is the rule to live by

There are two rules strictly enforced at all Google data centers. The first, the “least privilege” protocol, is the idea that someone should have only the bare minimum privileges necessary to perform their job. If your least privilege is to enter Layer 2, you won’t have any luck moving on to Layer 3. Each person’s access permissions are checked at badge readers at every access point in a data center facility, so authorization is verified everywhere under this protocol.

The second rule prevents tailgating: a vehicle or individual closely following another to gain entry into a restricted area without a badge swipe. If the system detects a door open for too long, it immediately alerts security personnel. Any gate or door must close before the next vehicle or person can badge in and gain access.

Two security checks: badge first, then circle lock

You’ve probably seen dual authentication when you try to sign in to an account and a one-time password is sent to your phone. We take a similar approach at the data centers to verify a person’s identity and access. At some layers in the data center, you’re required to swipe your badge, then enter a circle lock, or tubular doorway. You walk into a special “half portal” that checks your badge and scans your eyes to gain access to the next layer of the data center. It prevents tailgating because only one person is allowed in the circle lock at a time.

Shipments are received through a secure loading dock

The facility loading docks are a special section of Layer 3, used to receive and send shipments of materials, such as new hardware. Truck deliveries must be approved for access to Layer 3 to enter the dock. For further security, the loading dock room is physically isolated from the rest of the data center, and guard presence is required when a shipment is received or sent.

All hard drives are meticulously tracked

Hard drive tracking is important to the security of your data because hard drives contain encrypted sensitive information. Google meticulously tracks the location and status of every hard drive within our data centers—from acquisition to destruction—using barcodes and asset tags. These asset tags are scanned throughout a hard drive’s lifecycle in a data center from the time it’s installed to the time it’s removed from circulation. Tracking hard drives closely ensures they don’t go missing or end up in the wrong hands.

We also make sure hard drives are properly functioning by doing frequent performance tests. If a component fails to pass a performance test, it’s deemed no longer usable. To prevent any sensitive information from living on that disk, we remove it from inventory to be erased and destroyed in Layer 6, Disk Erase. There, the disk erase formatter uses a multi-step process that wipes the disk data and replaces each bit of data with zeros. If the drive can’t be erased for any reason, it’s stored securely until it can be physically destroyed. 

Layered security extends into the tech itself

Our layered security approach isn’t just a physical safeguard for entering our data centers. It’s also how we protect the hardware and software that live in our data centers. At the deepest layer, most of our server boards and networking equipment are custom-designed by Google. For example, we design chips, such as the Titan hardware security chip, to securely identify and authenticate legitimate Google hardware. 

At the storage layer, data is encrypted while it travels in and out of the data center and when it’s stored at the data center. This means that whether data is traveling over the internet, moving between Google’s facilities, or stored on our servers, it’s protected. Google Cloud customers can even supply their own encryption keys and manage them in a third-party key management system deployed outside Google’s infrastructure. This defense-in-depth approach helps expand our ability to mitigate potential vulnerabilities at every point.

To learn more about our global data centers, visit our Data and Security page. We will also be sharing more about our security best practices during the upcoming Google Cloud Next ’20: OnAir event.


Analyzing petabytes of data just got easier, with Google Sheets

At Google Cloud, we believe everyone—not just those who specialize in writing complex queries—should be able to harness the power of data. As businesses adapt to new realities, it’s important that employees have access to useful data, so they can make informed decisions quickly and produce better business results. To help deliver on this vision, we’re making it easy for anyone to work with massive datasets in Google Sheets and we’re adding new, intelligent features to help automate data preparation and analysis.

Better together: BigQuery + Google Sheets 

Today, we’re announcing the general availability of Connected Sheets, which provides the power and scale of a BigQuery data warehouse in the familiar context of Sheets. Connected Sheets enables people to analyze billions of rows and petabytes of data in Sheets—without requiring specialized knowledge of computer languages like SQL. A live connection between BigQuery and Sheets means your data stays fresh and protected by Google’s security architecture, unlike with desktop spreadsheet applications. People across your organization can apply familiar Sheets tools like pivot tables, charts, and formulas to big data, quickly deriving insights and reducing their dependence on specialized analysts.

Through our beta program, several customers have already experienced the benefits of Connected Sheets. PwC, a global professional services organization, uses Connected Sheets as part of its efforts to make technology and data more accessible across its workforce. Says Peter Van Nieuwerburgh, Global Change Manager at PwC, “If you look at our own adoption dashboarding, it’s more than three terabytes of data—good luck putting that in any spreadsheet. With Connected Sheets, we’re not really pulling the data into the spreadsheet, rather it lives in the database where it belongs. The ability to go and so easily analyze and visualize the data is really powerful.”

PwC scales data insights across the workforce with Connected Sheets, which enables people to analyze billions of rows of BigQuery data from the familiar Google Sheets interface, without needing SQL knowledge.

Get to insights faster

We continue to build Google AI natively into Sheets, so it’s easy for everyone—not just specialized analysts—to quickly make data-backed decisions. For example, you can ask questions about your data in plain English or see suggested charts and pivot tables. Today we’re announcing upcoming new features that leverage the power of Google AI to get you to insights even faster: 

  • Automate data entry: Later this year we’ll be launching Sheets Smart Fill, which detects and learns patterns between columns to intelligently autocomplete data for you. Say you have a column of full names, but you want to split it into two columns (first and last name, for example). As you start typing first names into a column, Sheets will automatically detect the pattern, generate the corresponding formula, and then autocomplete the rest of the column for you. Similar to how Smart Compose in Gmail helps you write faster with fewer mistakes, Sheets Smart Fill makes data entry quicker and less error prone.
  • Make confident decisions, with clean data: Before making critical decisions, it’s important to ensure your data is consistent and error-free. We’ll soon be introducing Sheets Smart Cleanup to make data cleanup faster and more accurate. Upon data import, Sheets will surface intelligent suggestions in the side panel; it could, for example, help you identify and fix duplicate rows and number-formatting issues. You’ll also see column stats that provide a quick snapshot of your data, such as the distribution of values or most frequent value in a column, helping you quickly catch potential outliers and confidently move on to analysis.

Connected Sheets will start rolling out to G Suite Enterprise, G Suite Enterprise for Education, and G Suite Enterprise Essentials customers today—learn more here. You can also watch this feature overview and in-depth product tutorial. Sheets Smart Fill and Smart Cleanup will be available to G Suite customers later this year.
