Google Research: Looking Back at 2020, and Forward to 2021

Posted by Jeff Dean, Senior Fellow and SVP of Google Research and Health, on behalf of the entire Google Research community

When I joined Google over 20 years ago, we were just figuring out how to really start on the journey of making a high quality and comprehensive search service for information on the web, using lots of curiously wired computers. Fast forward to today, and while we’re taking on a much broader array of technical challenges, it’s still with the same overarching goal of organizing the world’s information and making it universally accessible and useful. In 2020, as the world has been reshaped by COVID-19, we saw the ways research-developed technologies could help billions of people better communicate, understand the world, and get things done. I’m proud of what we’ve accomplished, and excited about new possibilities on the horizon.

The goal of Google Research is to work on long-term, ambitious problems across a wide range of important topics — from predicting the spread of COVID-19, to designing algorithms, to learning to translate more and more languages automatically, to mitigating bias in ML models. In the spirit of our annual reviews for 2019, 2018, and more narrowly focused reviews of some work in 2017 and 2016, this post covers key Google Research highlights from this unusual year. This is a long post, but grouped into many different sections. Hopefully, there’s something interesting in here for everyone! For a more comprehensive look, please see our >750 research publications in 2020.

COVID-19 and Health
As the impact of COVID-19 took a tremendous toll on people’s lives, researchers and developers around the world rallied together to develop tools and technologies to help public health officials and policymakers understand and respond to the pandemic. Apple and Google partnered in 2020 to develop the Exposure Notifications System (ENS), a Bluetooth-enabled privacy-preserving technology that allows people to be notified if they have been exposed to others who have tested positive for COVID-19. ENS supplements traditional contact tracing efforts and has been deployed by public health authorities in more than 50 countries, states and regions to help curb the spread of infection.

In the early days of the pandemic, public health officials signaled their need for more comprehensive data to combat the virus’ rapid spread. Our Community Mobility Reports, which provide anonymized insights into movement trends, are helping researchers not only understand the impact of policies like stay-at-home directives and social distancing, but also conduct economic forecasting.

Community Mobility Reports: Navigate and download a report for regions of interest.

Our own researchers have also explored using this anonymized data to forecast COVID-19 spread using graph neural networks instead of traditional time series-based models.

Although the research community initially knew little about this disease and its secondary effects, we’re learning more every day. Our COVID-19 Search Trends symptoms dataset allows researchers to explore temporal associations between symptom-related searches and the disease’s spread, for symptoms such as anosmia, the loss of smell that is sometimes a symptom of the virus. To further support the broader research community, we launched the Google Health Studies app to give the public a way to participate in research studies.

Our COVID-19 Search Trends are helping researchers study the link between the disease’s spread and symptom-related searches.

Teams across Google are contributing tools and resources to the broader scientific community, which is working to address the health and economic impacts of the virus.

A spatio-temporal graph for modeling COVID-19 spread.

Accurate information is critical in dealing with public health threats. We collaborated with many product teams at Google to improve the quality of information about COVID-19 in Google News and Search by supporting fact-checking efforts, as well as similar efforts in YouTube.

We helped multilingual communities get equal access to critical COVID-19 information by sponsoring localization of Nextstrain.org’s weekly Situation Reports and developing a COVID-19 open source parallel dataset in collaboration with Translators Without Borders.

Modeling a complex global event is particularly challenging and requires more comprehensive epidemiological datasets, the development of novel interpretable models, and agent-based simulators to inform the public health response. Machine learning techniques have also helped in other ways, from deploying natural language understanding to help researchers quickly navigate the mountains of COVID-19 scientific literature, to applying anonymization technology to protect privacy while making useful datasets available, to exploring whether public health can conduct faster screening with fewer tests via Bayesian group testing.

These are only a sample of the many pieces of work that happened across Google to help users and public health authorities respond to COVID-19. For more, see using technology to help take on COVID-19.

Research in Machine Learning for Medical Diagnostics
We continue to make headway helping clinicians harness the power of ML to deliver better care for more patients. This year we have described notable advances in applying computer vision to aid doctors in the diagnosis and management of cancer, including helping to make sure that doctors don’t miss potentially cancerous polyps during colonoscopies, showing that an ML system can achieve substantially higher accuracy than pathologists in Gleason grading of prostate tissue, and enabling radiologists to achieve significant reductions in both false negative and false positive results when examining X-rays for signs of breast cancer.

To determine the aggressiveness of prostate cancers, pathologists examine a biopsy and assign it a Gleason grade. In published research, our system was able to grade with higher accuracy than a cohort of pathologists who have not had specialist training in prostate cancer. The first stage of the deep learning system assigns a Gleason grade to every region in a biopsy. In this biopsy, green indicates Gleason pattern 3, while yellow indicates Gleason pattern 4.

We’ve also been working on systems to help identify skin disease, help detect age-related macular degeneration (the leading cause of blindness in the U.S. and U.K., and the third-largest cause of blindness worldwide), and on potential novel non-invasive diagnostics (e.g., being able to detect signs of anemia from retinal images).

Our study examines how a deep learning model can quantify hemoglobin levels — a measure doctors use to detect anemia — from retinal images.

This year has also brought exciting demonstrations of how these same technologies can peer into the human genome. Google’s open-source tool, DeepVariant, identifies genomic variants in sequencing data using a convolutional neural network, and this year won the FDA Challenge for best accuracy in 3 out of 4 categories. Using this same tool, a study led by the Dana-Farber Cancer Institute improved diagnostic yield by 14% for genetic variants that lead to prostate cancer and melanoma in a cohort of 2,367 cancer patients.

Research doesn’t end at measurement of experimental accuracy. Ultimately, truly helping patients receive better care requires understanding how ML tools will affect people in the real world. This year we began work with Mayo Clinic to develop a machine learning system to assist in radiotherapy planning and to better understand how this technology could be deployed into clinical practice. With our partners in Thailand, we’ve used diabetic eye disease screening as a test case in how we can build systems with people at the center, and recognize the fundamental role of diversity, equity, and inclusion in building tools for a healthier world.

Weather, Environment and Climate Change
Machine learning can help us better understand the environment and make useful predictions to help people both in everyday life and in disaster situations. For weather and precipitation forecasting, computationally intensive physics-based models like NOAA’s HRRR have long reigned supreme. We have been able to show, though, that ML-based forecasting systems can predict current precipitation with much better spatial resolution (“Is it raining in my local park in Seattle?” and not just “Is it raining in Seattle?”) and can produce short-term forecasts of up to eight hours that are considerably more accurate than HRRR’s, computed more quickly and at higher temporal and spatial resolution.

A visualization of predictions made over the course of roughly one day. Left: The 1-hour HRRR prediction made at the top of each hour, the limit to how often HRRR provides predictions. Center: The ground truth, i.e., what we are trying to predict. Right: The predictions made by our model, issued every 2 minutes (displayed here every 15 minutes) at roughly 10 times the spatial resolution of HRRR. Notice that we capture the general motion and general shape of the storm.

We’ve also developed an improved technique called HydroNets, which uses a network of neural networks to model the world’s actual river systems and more accurately capture the interactions of upstream water levels with downstream inundation, resulting in more accurate water-level predictions and flood forecasting. Using these techniques, we’ve expanded our coverage of flood alerts by 20x in India and Bangladesh, helping to better protect more than 200 million people across 250,000 square kilometers.

An illustration of the HydroNets architecture.

Better analysis of satellite imagery data can also give Google users a better understanding of the impact and extent of wildfires (which caused devastating effects in California and Australia this year). We showed that automated analysis of satellite imagery can help with rapid assessment of damage after natural disasters even with limited prior satellite imagery. It can also aid urban tree-planting efforts by helping cities assess their current tree canopy coverage and where they should focus on planting new trees. We’ve also shown how machine learning techniques that leverage temporal context can help improve ecological and wildlife monitoring.

Based on this work, we’re excited to partner with NOAA on using AI and ML to amplify NOAA’s environmental monitoring, weather forecasting and climate research using Google Cloud’s infrastructure.

Accessibility
Machine learning continues to provide amazing opportunities for improving accessibility, because it can learn to translate one kind of sensory input into another. As one example, we released Lookout, an Android application that can help visually impaired users by identifying packaged foods, whether in a grocery store or in their kitchen cupboard at home. The machine learning system behind Lookout demonstrates that a powerful-but-compact model can accomplish this in real time on a phone for nearly 2 million products.

Similarly, people who communicate with sign language find it difficult to use video conferencing systems, because even when they are signing, they are not detected as actively speaking by audio-based speaker detection systems. Our work on developing real-time, automatic sign language detection for video conferencing presents a real-time detection model and demonstrates how it can give video conferencing systems a mechanism to identify the person signing as the active speaker.

We also enabled useful Android accessibility capabilities such as Voice Access and Sound Notifications for important household sounds.

Live Caption was expanded to support calls on the Pixel phone with the ability to caption phone calls and video calls. This came out of the Live Relay research project, which enables deaf and hard of hearing people to make calls without assistance.

Applications of ML to Other Fields
Machine learning continues to prove vital in helping us make progress across many fields of science. In 2020, in collaboration with the FlyEM team at HHMI Janelia Research Campus, we released the Drosophila hemibrain connectome, the largest synapse-resolution map of brain connectivity, reconstructed using large-scale machine learning models applied to high-resolution electron microscope imaging of brain tissue. This connectome information will aid neuroscientists in a wide variety of inquiries, helping us all better understand how brains function. Be sure to check out the very fly interactive 3-D UI!

The application of ML to problems in systems biology is also on the rise. Our Google Accelerated Science team, in collaboration with our colleagues at Calico, has been applying machine learning to yeast to get a better understanding of how genes work together as a whole system. We’ve also been exploring how to use model-based reinforcement learning to design biological sequences, like DNA or proteins, that have desirable properties for medical or industrial uses.…

Expanding the Gmail delegate limit

What’s changing 

We’re expanding the number of allowed Gmail delegates from 25 to up to 1,000. In addition, to help with delegate management, delegation for Contacts is now available via the Contacts API.

Who’s impacted 

Admins, end users, and developers 

Why you’d use it 

We’ve heard from you that allowing more than 25 users to access a single mailbox can help teams manage high volumes of messages. For example, in certain cases the volume and variety of inquiries necessitates having a larger group of users able to read and respond to them. To address this need, we’ve enabled higher limits.

Additionally, we’re now offering programmatic access to managing Contacts delegation, similar to how Gmail delegation can be managed via the API. Note that delegate limits for Contacts are not changing.
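For illustration, here is a hedged sketch of managing Gmail delegation programmatically through the Gmail API’s users.settings.delegates resource; the email addresses and token below are placeholders, and the request must be authorized with the appropriate scope:

```sh
# Sketch: add assistant@example.com as a delegate of manager@example.com.
# The token must carry the gmail.settings.sharing scope (admin-restricted).
curl -X POST \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"delegateEmail": "assistant@example.com"}' \
  "https://gmail.googleapis.com/gmail/v1/users/manager@example.com/settings/delegates"
```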

Additional details 

Where previously we advised that a technical limit of 25 delegates applied for email delegation, our updated guidance is as follows:
  • Up to 1,000 users may be configured as delegates for the purposes of setting up access control.
  • With typical usage, up to 40 of those configured users may concurrently access the account. However, please note that:
    • All other Gmail limits and policies continue to apply.
    • Heavy usage by any user will reduce the overall concurrency limit. This will most commonly occur if automation or scripting (e.g., via Chrome extension or API) is used to engage with the mailbox via high-frequency actions.
  • For high-volume operations such as sales or support teams, we continue to recommend the use of a dedicated ticketing system; Gmail is not intended to replace a ticketing system for scaled operations.
  • Contacts delegation is still limited to 25 delegates per user.

Availability

• Available to Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus, as well as G Suite Basic, Business, Education, Enterprise for Education, and Nonprofits customers
• Not available to users with personal Google Accounts


    Solve for the United Nations’ Sustainable Development Goals with Google technologies in this year’s Solution Challenge.

    Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

    Created by the United Nations in 2015 to be achieved by 2030, the 17 Sustainable Development Goals (SDGs) agreed upon by all 193 United Nations Member States aim to end poverty, ensure prosperity, and protect the planet.

    Last year brought many challenges, but it also brought a greater spirit around helping each other and giving back to our communities. With that in mind, we invite students around the world to join the Google Developer Student Clubs 2021 Solution Challenge!

    If you’re new to the Solution Challenge, it is an annual competition that invites university students to develop solutions for real world problems using one or more Google products or platforms.

This year, see how you can use Android, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action by building a solution for one or more of the UN Sustainable Development Goals.

    What winners of the Solution Challenge receive

    Participants will receive specialized prizes at different stages:

    1. The Top 50 teams will receive mentorship from Google and other experts to further work on their projects.
    2. The Top 10 finalists will receive a 1-year subscription to Pluralsight, swag, additional customized mentoring from Google, and a feature in the Google Developers Blog and Demo Day live on YouTube.
    3. The 3 Grand Prize Winners will receive all the prizes included in the Top 10 category along with a Chromebook and a private team meeting with a Google executive.

    How to get started on the Solution Challenge

    There are four main steps to joining the Solution Challenge and getting started on your project:

    1. Register at goo.gle/solutionchallenge and join a Google Developer Student Club at your college or university. If there is no club at your university, you can join the closest one through the event platform.
2. Select one or more of the United Nations’ 17 Sustainable Development Goals to solve for.
    3. Build a solution using Google technology.
    4. Create a demo and submit your project by March 31, 2021.

    Resources from Google for Solution Challenge participants

    Google will provide Solution Challenge participants with various resources to help students build strong projects for their contest submission.

    • Live online sessions with Q&As
    • Mentorship from Google, Google Developer Experts, and the Developer Student Club community
    • Curated codelabs designed by Google Developers
    • Access to Design Sprint guidelines developed by Google Ventures
    • and more!

    When are winners announced?

    Once all the projects are submitted after the March 31st deadline, judges will evaluate and score each submission from around the world using the criteria listed on the website. From there, winning solutions will be announced in three rounds.

    Round 1 (May): The Top 50 teams will be announced.

    Round 2 (July): After the top 50 teams submit their new and improved solutions, 10 finalists will be announced.

    Round 3 (August): In the finale, the top 3 grand prize winners will be announced live on YouTube during the 2021 Solution Challenge Demo Day.

    With a passion for building a better world, savvy coding skills, and a little help from Google, we can’t wait to see the solutions students create.

Learn more and sign up for the 2021 Solution Challenge here.


    4 best practices for ensuring privacy and security of your data in Cloud Storage

    Cloud storage enables organizations to reduce costs and operational burden, scale faster, and unlock other cloud computing benefits. At the same time, they must also ensure they meet privacy and security requirements to restrict access and protect sensitive information. 

    Security is a common concern we hear from companies as they move their data to the cloud, and it’s a top priority for all our products. Cloud Storage offers simple, reliable, and cost-effective storage and retrieval of any amount of data at any time, with built-in security capabilities such as encryption in transit and at rest and a range of encryption key management options, including Google-managed, customer-supplied, customer-managed and hardware security modules. Google has one of the largest private networks in the world, minimizing exposure of your data to the public internet when you use Cloud Storage. 

    Best practices for securing your data with Cloud Storage

    Securing enterprise storage data requires planning ahead to protect your data from future threats and new challenges. Beyond the fundamentals, Cloud Storage offers several security features, such as uniform bucket-level access, service account HMAC keys, IAM conditions, Delegation tokens, and V4 signatures. 

    We wanted to share some security best practices for using these features to help secure and protect your data at scale: 

    #1: Use org policies to centralize control and define compliance boundaries
    Cloud Storage, just like Google Cloud, follows a resource hierarchy. Buckets hold objects, which are associated with projects, which are then tied to organizations. You can also use folders to further separate project resources. Org policies are settings that you can configure at the org, folder, or project level to enforce service-specific behaviors. 

Here are two org policies we recommend enabling (a command-line sketch follows this list): 

    • Domain-restricted sharing—This policy prevents content from being shared with people outside your organization. For example, if you tried to make the contents of a bucket available to the public internet, this policy would block that operation. 

    • Uniform bucket-level access—This policy simplifies permissions and helps manage access control at scale. With this policy, all newly created buckets have uniform access control configured at the bucket level governing access for all the underlying objects. 
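As a sketch, the equivalent gcloud commands might look like the following; the organization ID and Workspace customer ID are placeholders:

```sh
# Enforce uniform bucket-level access on all newly created buckets.
gcloud resource-manager org-policies enable-enforce \
    constraints/storage.uniformBucketLevelAccess --organization=123456789012

# Restrict sharing to identities in your own domain, identified by customer ID.
gcloud resource-manager org-policies allow \
    constraints/iam.allowedPolicyMemberDomains C0abcdefg --organization=123456789012
```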

    #2: Consider using Cloud IAM to simplify access control  
    Cloud Storage offers two systems for granting permissions to your buckets and objects: Cloud IAM and Access Control Lists (ACLs). For someone to access a resource, only one of these systems needs to grant permissions. 

    ACLs are object-level and grant access to individual objects. As the number of objects in a bucket increases, so does the overhead required to manage individual ACLs. It becomes difficult to assess how secure all the objects are within a single bucket. Imagine having to iterate across millions of objects to see if a single user has the correct access. 

We recommend using Cloud IAM to control access to your resources. Cloud IAM provides a uniform, platform-centric mechanism to manage access control for your Cloud Storage data across all of Google Cloud. When you enable uniform bucket-level access control, object ACLs are disallowed and Cloud IAM policies at the bucket level are used to manage access, so permissions granted at the bucket level automatically apply to all the objects in a bucket.
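For example, here is a minimal sketch, with hypothetical bucket and group names, of enabling uniform bucket-level access and granting read access with a single bucket-level IAM binding:

```sh
# Enable uniform bucket-level access; object ACLs are then disallowed.
gsutil ubla set on gs://example-bucket

# One IAM binding now governs read access to every object in the bucket.
gsutil iam ch group:data-readers@example.com:objectViewer gs://example-bucket
```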

    #3: If you can’t use IAM Policies, consider other alternatives to ACLs 
    We recognize that sometimes our customers continue to use ACLs for different reasons, such as multi-cloud architectures or sharing an object with an individual user. However, we don’t recommend putting end users on object ACLs. 

    Consider these alternatives instead: 

• Signed URLs—Signed URLs allow you to delegate time-limited access to your Cloud Storage resources. When you generate a signed URL, its query string contains authentication information tied to an account with access (e.g., a service account). For example, you could send someone a URL that lets them read a document for one week, after which access is revoked (see the sketch after this list). 

• Separate buckets—Audit your buckets and look for access patterns. If you notice that a group of objects all share the same object ACL set, consider moving them into a separate bucket so you can control access at the bucket level. 

    • IAM conditions—If your app uses shared prefixes in object naming, you could also use IAM Conditions to shard access based on those prefixes.

    • Delegation Tokens—You can use STS Tokens to grant time-limited access to Cloud Storage buckets and shared prefixes. 
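As a sketch of the signed URL alternative above (the key file, bucket, and object names are placeholders), gsutil can mint a V4 signed URL from a service account key; V4 signatures cap the lifetime at one week:

```sh
# Create a GET-only signed URL for one object, valid for 7 days.
gsutil signurl -d 7d -m GET service-account-key.json gs://example-bucket/report.pdf
```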

#4: Use HMAC keys for service accounts, not user accounts 
    A hash-based message authentication key (HMAC key) is a type of credential used to create signatures included in requests to Cloud Storage. In general, we suggest using HMAC keys for service accounts rather than user accounts. This helps eliminate the security and privacy implications of relying on accounts held by individual users. It also reduces the risk of service access outages as user accounts could be disabled when a user leaves a project or company.  

    To further improve security, we also recommend: 

• Regularly changing your keys as part of a key rotation policy (see the sketch after this list).

    • Granting service accounts the minimum access to accomplish a task (i.e. the principle of least privilege). 

    • Setting reasonable expiration times if you’re still using V2 signatures (or migrating to V4 signatures, which automatically enforces a maximum one-week time limit). 
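A minimal rotation sketch with gsutil, assuming a placeholder service account and key ID: create a replacement key, switch your workloads over, then deactivate and delete the old one:

```sh
# Create a replacement HMAC key for the service account.
gsutil hmac create uploader@my-project.iam.gserviceaccount.com

# Once workloads use the new key, retire and remove the old one.
gsutil hmac update -s INACTIVE GOOG1EXAMPLEOLDKEYID
gsutil hmac delete GOOG1EXAMPLEOLDKEYID
```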

    To learn more about Cloud Storage and more ways to keep your data safe and compliant, check out our access control documentation, and watch our breakout session from Cloud Next ‘20: OnAir.


    Migrating data, technology and people to Google Cloud

    Editor’s note: Bukalapak, an ecommerce company based in Jakarta, is one of Indonesia’s largest businesses. As their platform grew to serve over 100 million customers and 12 million merchants, they needed a solution that would reliably and securely scale to handle millions of transactions a day. Here, they discuss their migration to Google Cloud and the value added from its managed services.

    Similar to many other enterprises, Bukalapak’s ecommerce platform did not originate in the cloud. It was initially built leveraging on-premises technologies that worked quite well at the beginning. However, as our business grew—processing over 2 million transactions per day and supporting 100 million customers—it became challenging to keep up with the necessary scale and availability needs. It wasn’t uncommon to see traffic spikes following promotional events, which were frequent. Our infrastructure and overall architecture, however, just wasn’t designed to handle this scale of demand. It was clear we needed a new way to support the success of the business, a way that would allow us to scale to meet fast-growing demand, while providing the best experience to our customers, all without overburdening our team. This led us to implement significant architectural changes, and consider a migration to the cloud.

    Choosing Google Cloud

Given that this migration would be a large and complex endeavor, we wanted a partner in this journey, not just a vendor. We started by evaluating the product and services portfolio of potential providers, along with their ability to innovate and solve cutting-edge problems. With our very limited experience in the cloud, it was critical to have an experienced professional services team that could effectively guide and support us throughout the migration journey. We also evaluated the overall cost and the availability of data centers in Indonesia that would allow us to comply with government requirements for financial products. Finally, we needed to plan for how we would attract and retain talent, so we looked at the degree of adoption of each provider across Southeast Asia, and specifically Indonesia. After careful consideration across these areas, Google Cloud was the right choice for us.

    Embarking on the cloud migration

Our on-premises deployment included over 160 relational and NoSQL databases. We also maintained a Kubernetes cluster of over 1,000 nodes and over 30,000 cores, running 550 production microservices and one large monolith application. To address the large amount of technical debt our platform had, we decided against a lift-and-shift approach. Instead, we spent a good deal of time refactoring our services, particularly our monolith application (a.k.a., the mothership), and partitioning our databases. Enhancing our monitoring and alerting, deployment tooling, and testing frameworks was critical to improving the quality of our software, development and release processes, and performance and incident management. We also invested heavily in automation, moving away from manual testing to integration testing, API testing and front-end testing. Adopting the tooling and best practices of DevOps, MLOps and ChatOps increased our engineering velocity and improved the quality of our products and services. 

    For a team that had very limited cloud experience, it was clear early on that this was not just a technology migration. It involved a cultural migration as well, and we wanted to ensure our team could perform the migration while gaining the skill set and experience needed to maintain and develop cloud-based applications. We started by training a smaller team, which took on the task of migrating our first services. Incrementally, we worked on expanding the training, and looping in more and more engineers in the migration efforts. As more engineering teams got involved, we paired them with one of the engineers who joined the migration early on and acted as a coach. This approach allowed us to transfer knowledge and roll out best practices, incrementally but surely, across the entire organization. 

    We took a multi-step approach for the migration. We started by focusing on the cloud foundation work, introducing automation and new technologies like Ansible and Terraform. We also invested heavily in establishing a strong security foundation, onboarding WAF and Anti-DDoS, domain threat detection, network scanning, and image hardening tools, to name a few. From there, we started to migrate the smaller, simpler services and worked our way up to the more complex. That helped the team gain experience over time while managing risk appropriately. In the end, we successfully completed the migration in just 18 months, with very minimal downtime.  

    Managed services for greater peace of mind

    Our team selected Cloud SQL early on as the fully managed service for most of our MySQL and PostgreSQL databases. We appreciated how easy Cloud SQL made it to manage and maintain our databases. With just a few simple API calls, we could quickly set up a new instance or read replica. Auto-failovers and auto-increasing disk size ensured we could run reliably without a heavy operational burden. In addition to Cloud SQL, we’ve now been able to integrate across the other Google Cloud data services, including BigQuery, Data Studio, Pub/Sub, and Dataflow. These services have been instrumental in helping us process, store, and gain insights from a massive amount of data. That in turn allowed us to better understand our customers and consistently find new opportunities to make improvements on their behalf.
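As an illustration of how lightweight that setup is, creating an instance and a read replica takes two gcloud calls. This is a sketch; the instance names, tier and region are hypothetical, not our actual configuration:

```sh
# Create a managed MySQL instance.
gcloud sql instances create orders-db \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-2 \
    --region=asia-southeast2

# Add a read replica that Cloud SQL provisions and keeps in sync automatically.
gcloud sql instances create orders-db-replica \
    --master-instance-name=orders-db \
    --region=asia-southeast2
```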

Google Cloud’s managed services give us greater peace of mind. Our team spends less time on maintenance and operations. Instead, we have more time and resources to focus on building products and solving problems related to our core business. Our engineering velocity has increased, and our team has access to Google’s cutting-edge technology, enabling us to solve problems more efficiently. In addition, our platform now has higher uptime, and can scale with ease to keep up with unpredictable and growing demand. We were also able to improve the overall security of our platform and now have a standardized security model that can easily be implemented for new applications. The larger impact has been on what our lean infrastructure team is now able to accomplish. Migrating to Google Cloud gave us the strategic and competitive advantages we were looking for. 

    Both throughout the migration and now that we’re running in production, Google Cloud has been a great partner to us. The Google Cloud team put a lot of effort into understanding what we needed to be successful, and advocating for our needs, often connecting us to product teams or experts from others in the organization. Their desire to go the extra mile on behalf of their customers made our experience positive and ultimately made our cloud migration successful.

    Learn more about Bukalapak and how you can migrate to Google Cloud managed databases.


    Compute Engine explained: Scheduling the OS patch management service

    Last year, we introduced the OS patch management service to protect your running Compute Engine VMs against defects and vulnerabilities. The service makes patching Linux and Windows VMs with the latest OS upgrades simple, scalable and effective. In this blog, we share a step-by-step guide on how to set up a project with a schedule to automatically patch filtered VM instances, resolve issues if an agent is not detected, and view an overview of patch compliance across your VM fleet.

    Getting started

    Imagine an example project with several VM instances hosting a mythical web service. You want to automatically keep the instances updated with the latest critical fixes and security updates against malicious software. You have a production fleet and a development fleet of machines for which you want to apply updates using different schedules. 

    First, enable the service by navigating to GCE > OS Patch Management in the Google Cloud Console. Alternatively, you can also enable Cloud OS Config API and Container Analysis API through the Google Cloud Marketplace, or gcloud:
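```sh
# A sketch of the equivalent enablement commands; run them in your own project.
gcloud services enable osconfig.googleapis.com
gcloud services enable containeranalysis.googleapis.com
```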

    Note that the OS Config agent is most likely already installed on the VM instances and just needs to be enabled via project metadata keys:
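```sh
# A sketch: enable the OS Config agent and guest attributes via project metadata.
gcloud compute project-info add-metadata \
    --metadata=enable-osconfig=TRUE,enable-guest-attributes=TRUE
```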

After the agent collects data across the VM fleet, this data is displayed on the patch compliance dashboard, which shows the state across all your VMs and operating systems and gives a bird’s-eye view of your patch compliance.

    You can now see some VM instances that you might like to patch more frequently, for example the CentOS and Red Hat Enterprise Linux (RHEL) fleet. 

    Creating a patch deployment

You can then click New Patch Deployment at the top of the screen and walk through the steps to create a patch deployment for the target VMs, each with specific patch configurations and scheduling options.

    In the Target VMs section, you can use VM Instance name prefixes and labels to target only the VM instances with labels that start with a certain prefix. More instance filtering options are available, including zonal and combinations of label groups.
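If you prefer the command line, you can kick off an equivalent one-off patch job with gcloud. This is a sketch, and the label and display name are hypothetical:

```sh
# Patch only instances labeled env=prod, allowing up to three hours to complete.
gcloud compute os-config patch-jobs execute \
    --instance-filter-group-labels="env=prod" \
    --display-name="rhel-critical-updates" \
    --duration="3h"
```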

    In the Patch config option, you can select to patch RHEL, CentOS and Windows with critical and security patches, or specify exact Microsoft Knowledge Base (KB) numbers and packages to install. You can also exclude specific packages from being installed in the ‘Exclude’ fields.

Finally, you can schedule the patch job. For example, you can run the job every second Tuesday of the month during a three-hour maintenance window, from 11 AM to 2 PM.

After the patch job runs, you can see the result of the installed patches. This information is reported on the compliance dashboard and the VM instances tab.

    Patch your Compute Engine VMs today

    To learn more about the OS patch management service on Compute Engine including automating patch deployment, visit our documentation page.


    Introducing Ruby on Google Cloud Functions

Cloud Functions, Google Cloud’s Function as a Service (FaaS) offering, is a lightweight compute platform for creating single-purpose, stand-alone functions that respond to events, without having to manage a server or runtime environment. Cloud Functions is a great fit for serverless application backends, mobile and IoT backends, real-time data processing systems, video, image and sentiment analysis, and even things like chatbots or virtual assistants.

    Today we’re bringing support for Ruby, a popular, general-purpose programming language, to Cloud Functions. With the Functions Framework for Ruby, you can write idiomatic Ruby functions to build business-critical applications and integration layers. And with Cloud Functions for Ruby, now in Preview, you can deploy functions in a fully managed Ruby 2.6 or Ruby 2.7 environment, complete with access to resources in a private VPC network. Ruby functions scale automatically based on your load. You can write HTTP functions to respond to HTTP events, and CloudEvent functions to process events sourced from various cloud and Google Cloud services including Pub/Sub, Cloud Storage and Firestore.

    You can develop functions using the Functions Framework for Ruby, an open source functions-as-a-service framework for writing portable Ruby functions. With Functions Framework you develop, test, and run your functions locally, then deploy them to Cloud Functions, or to another Ruby environment.
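Deploying is a single command. As a hedged sketch, assuming the hello_http function shown below lives in the current directory:

```sh
# Deploy an HTTP-triggered Ruby function (the runtime may also be ruby26).
gcloud functions deploy hello_http --runtime=ruby27 --trigger-http --allow-unauthenticated
```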

    Writing Ruby functions

The Functions Framework for Ruby supports HTTP functions and CloudEvent functions. An HTTP cloud function is very easy to write in idiomatic Ruby. Below is a simple HTTP function for webhook/HTTP use cases.
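```ruby
require "functions_framework"

# A minimal sketch; the function name "hello_http" is our own choice.
FunctionsFramework.http "hello_http" do |request|
  # "request" is a Rack::Request; the return value becomes the response body.
  name = request.params["name"] || "World"
  "Hello, #{name}!\n"
end
```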

    CloudEvent functions on the Ruby runtime can also respond to industry standard CNCF CloudEvents. These events can be from various Google Cloud services, such as Pub/Sub, Cloud Storage and Firestore.

    Here is a simple CloudEvent function working with Pub/Sub.
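```ruby
require "functions_framework"
require "base64"

# A sketch of a CloudEvent function; Pub/Sub message data arrives base64-encoded.
FunctionsFramework.cloud_event "hello_pubsub" do |event|
  name = Base64.decode64 event.data["message"]["data"] rescue "World"
  logger.info "Hello, #{name}!"
end
```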

    The Ruby Functions Framework fits comfortably with popular Ruby development processes and tools. In addition to writing functions, you can test functions in isolation using Ruby test frameworks such as Minitest and RSpec, without needing to spin up or mock a web server. Here is a simple RSpec example:
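```ruby
require "rspec"
require "functions_framework/testing"

# A sketch of an isolated test, assuming app.rb defines the hello_http function above.
describe "hello_http" do
  include FunctionsFramework::Testing

  it "responds with a greeting" do
    load_temporary "app.rb" do
      request = make_get_request "http://example.com/?name=Ruby"
      response = call_http "hello_http", request
      expect(response.status).to eq 200
      expect(response.body.join).to eq "Hello, Ruby!\n"
    end
  end
end
```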

    Try Cloud Functions for Ruby today

    Cloud Functions for Ruby is ready for you to try today. Read the Quickstart guide, learn how to write your first functions, and try it out with a Google Cloud free trial. If you want to dive a little bit deeper into the technical aspects, you can also read our Ruby Functions Framework documentation. If you’re interested in the open-source Functions Framework for Ruby, please don’t hesitate to have a look at the project and potentially even contribute. We’re looking forward to seeing all the Ruby functions you write!


    Girl Scouts of Northeast Texas, Waymo team up to transport cookies

Editor’s note: This blog was originally published by Let’s Talk Autonomous Driving on January 12, 2020.

Girl Scouts and Girl Scout Cookie fans nationwide look forward to the time of year when bright boxes of Thin Mints® and Samoas® roll out for delivery across the country. 

During this year’s “cookie season,” thousands of Girl Scouts’ signature treats will be transported in south Dallas with the help of Waymo, a company developing autonomous driving technology that could transform how people and things get where they’re going.

    “The Girl Scouts’ Cookie Program has helped girls and young women recognize and pursue their dreams for more than a century, and we’re honored to now be part of that legacy,” said Becky Bucich, chief people officer at Waymo. “We’re delivering today for tomorrow’s leaders, and we’re dedicated to inspiring the next diverse and inclusive generation of engineers, coders, programmers and STEM professionals.”

    Waymo, which has been pioneering autonomous driving technology for more than a decade, announced it would begin testing its Class-8 trucks in Texas in January. The California-based company also operates a fully autonomous ride-hailing service in Metro Phoenix and has tested in more than 25 cities nationwide.

    The collaboration between Girl Scouts of Northeast Texas (GSNETX) and Waymo aligns with the longstanding mission of Girl Scouts to prepare girls to thrive in the world, a vision set by Girl Scouts founder Juliette Gordon Low in 1912. In recent years, that has translated to a commitment to encourage girls to pursue careers in the fields of science, technology, engineering, and math (STEM). Girl Scouts of the USA, the national organization to which GSNETX is a council, has made building the STEM pipeline a priority across the country as our reliance on technology and science grows even more important.

“We are excited about our partnership with Waymo,” said Jennifer Bartkowski, chief executive officer for Girl Scouts of Northeast Texas. “Girls will experience a practical use for technology that is shaping our future, inspiring them to become the next generation of engineers, coders, and STEM professionals. At the same time, the North Texas community will see cutting-edge technology that can improve the world’s access to mobility. It is a win-win as Girl Scouts continues to change the workforce pipeline for North Texas.”

    As part of GSNETX’s virtual “Camp-In Camp Cookie,” which sets girls up for success during cookie season, Xinfeng Le, a product manager for Waymo’s trucking program, presented to the council’s young members about her work at Waymo while also giving girls an inside look at the variety of opportunities in a STEM career.

    GSNETX is also joining the Waymo-led public education initiative, Let’s Talk Autonomous Driving, which supports public dialogue around and understanding of autonomous driving technology. GSNETX is Let’s Talk Autonomous Driving’s first STEM-focused education partner and joins a diverse group of national and state-based organizations that share the belief that autonomous driving could make roads safer and improve mobility and accessibility.

    “We’re fortunate that Girl Scouts share our passion to cultivate a deeper understanding of the world around us, and we’re excited they have joined Let’s Talk Autonomous Driving as our first STEM-focused education partner,” said Bucich.


    An open fund for projects debunking vaccine misinformation

    The uncertainty and developing nature of the coronavirus pandemic continues to generate related misinformation. Fact-checkers have been hard at work debunking falsehoods online, with nearly 10,000 fact checks about the pandemic currently showing up across our products. 

    The global rollout of COVID-19 vaccines is exacerbating a perennial problem of misinformation about immunization. To support additional debunking efforts, the Google News Initiative is launching a COVID-19 Vaccine Counter-Misinformation Open Fund worth up to $3 million.

    While the COVID-19 infodemic has been global in nature, misinformation has also been used to target specific populations. Some of the available research also suggests that the audiences coming across misinformation and those seeking fact checks don’t necessarily overlap.

    For this reason, the Open Fund is accepting applications from projects that aim to broaden the audience of fact checks, particularly with those who may be disproportionately affected by misinformation in mind.

    The fund is global and open to news organizations of every size that have a proven track record in fact-checking and debunking activities, or partner with an organization with such recognition. 

    We will prioritize collaborative projects with an interdisciplinary team and clear ways to measure success. For example, eligible applications might include a partnership between an established fact-checking project and a media outlet with deep roots in a specific community, or a collaborative technology platform for journalists and doctors to jointly source misinformation and publish fact checks.

A global team of Googlers will review applications. The jury that will choose grantees is composed of the following:

    • Theresa Amobi, Senior Lecturer, University of Lagos

    • Ludovic Blecher, Head of Innovation, Google News Initiative

• Renee DiResta, Technical Research Manager, Stanford Internet Observatory

    • Susannah Eliott, CEO, Australian Science Media Centre

    • Gagandeep Kang, Head of the Wellcome Trust Research Laboratory, Christian Medical College

    • Alexios Mantzarlis, News and Information Credibility Lead, Google

    • Syed Nazakat, Founder & CEO, Data Leads

    • Ifeoma Ozoma, Founder and Principal, Earthseed

    • Baybars Örsek, Director, International Fact-Checking Network

• Andy Pattison, Manager of Digital Solutions, World Health Organization

    • Angela Pimenta, Director of Operations, Projor

    • Amy Pisani, Executive Director, Vaccinate Your Family

    • Yamil Velez, Associate Professor of Political Science, Columbia University

    • Brian Yau, Promotion & Engagement Lead, Vaccine Safety Net at WHO

    The Open Fund builds on support the GNI has provided to news efforts fighting pandemic misinformation in April and December of last year. We expect that selected projects will benefit from research the GNI is supporting into the formats, headlines and sources that are most effective in correcting COVID-19 vaccine misinformation. 

    Finally, we continue to make high quality, authoritative information about vaccines available in our products. We are continuing to expand the number of countries with information panels on authorized vaccines in Google Search, and we continue to surface fact checks across Google by using ClaimReview. We expanded the features in which users come across fact checks in 2020—in the COVID-19 Google News topic in the U.S., on Google News on mobile in Brazil and in Google Images globally.

    Please visit the Open Fund’s website to read more about eligibility criteria and find out how to apply.

    Introducing Google News performance report

    Today we are launching Google News performance reporting to help news publishers better understand user behavior on Google News on our Android and iOS apps, as well as on news.google.com.
