Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs
(Irene (left) and her DSC team from the Polytechnic University of Cartagena (photo taken prior to COVID-19))
Irene Ruiz Pozo is a former Google Developer Student Club (DSC) Lead at the Polytechnic University of Cartagena in Murcia, Spain. As one of the founding members, Irene has seen the club grow from just a few student developers at her university to hosting multiple learning events across Spain. Recently, we spoke with Irene to understand more about the unique ways in which her team helped local university students learn more about Google technologies.
Real world ML and AR learning opportunities
Irene mentioned two fascinating projects that she had the chance to work on through her DSC at the Polytechnic University of Cartagena. The first was a learning lab that helped students understand how to use 360º cameras and 3D scanners for machine learning.
(A DSC member giving a demo of a 360º camera to students at the National Museum of Underwater Archeology in Cartagena)
The second was a partnership with the National Museum of Underwater Archeology, where Irene and her team created an augmented reality game that let students explore a digital rendition of the museum’s exhibitions.
(An image from the augmented reality game created for the National Museum of Underwater Archeology)
In the above AR experience created by Irene’s team, users can create their own character and move throughout the museum and explore different virtual renditions of exhibits in a video game-like setting.
Hash Code competition and experiencing the Google work culture
One particularly memorable experience for Irene and her DSC was participating in Google’s annual programming competition, Hash Code. As Irene explained, the event allowed developers to share their skills and connect in small teams of two to four programmers. They would then come together to tackle engineering problems like how to best design the layout of a Google data center, create the perfect video streaming experience on YouTube, or establish the best practices for compiling code at Google scale.
(Students working on the Hash Code competition (photo taken prior to COVID-19))
To Irene, the experience felt like a live look at being a software engineer at Google. The event taught her and her DSC team that while programming skills are important, communication and collaboration skills are what really help solve problems. For Irene, the experience truly bridged the gap between theory and practice.
Expanding knowledge with a podcast for student developers
(Irene’s team working with other student developers (photo taken before COVID-19))
After the event, Irene felt that if a true mentorship network was established among other DSCs in Europe, students would feel more comfortable partnering with one another to talk about common problems they faced. Inspired, she began to build out her mentorship program which included a podcast where student developers could collaborate on projects together.
The podcast, which just released its second episode, also highlights upcoming opportunities for students. In the most recent episode, Irene and friends dive into how to apply for Google Summer of Code Scholarships and talk about other upcoming open source project opportunities. Organizing these types of learning experiences for the community was one of the most fulfilling parts of working as a DSC Lead, according to Irene. She explained that the podcast has been an exciting space that allows her and other students to get more experience presenting ideas to an audience. Through this podcast, Irene has already seen many new DSC members eager to join the conversation and collaborate on new ideas.
As Irene now looks out on her future, she is excited for all the learning and career development that awaits her from the entire Google Developer community. Having graduated from university, Irene is now a Google Developer Groups (GDG) Lead – a program similar to DSC, but created for the professional developer community. In this role, she is excited to learn new skills and make professional connections that will help her start her career.
Are you also a student with a passion for code? Then join a local Google Developer Student Club near you.
When my daughter Ruth was born this January, she was a handful. Literally. In the early months of her life, she refused to be put down, fussing and screaming unless we were holding her, walking up and down the hallways of our home. I became a sleep-deprived zombie, shuffling around with one arm free to get some much-needed coffee. And that was on a good day.
I needed all the help I could get. And for me, some of that help came in the form of Google Assistant. Thanks to the Google Nest devices around my house, I was able to get things done a little easier by saying, “Hey Google, turn the temperature down” or “Hey Google, play some soothing music.”
If you’re thinking about the frazzled parents in your life this holiday season, there are a variety of Nest products that might be just right for your gift list. Here are a few suggestions to get you started.
For the parents who are music lovers
Whether your kid blisses out to classic rock or gets hyped up to the umpteenth playing of “Twinkle Twinkle Little Star,” music is key to getting a little bit of peace in the house. The new Nest Audio can play songs via your favorite streaming music subscriptions with a simple voice command. Plus, it features Ambient IQ, which automatically adjusts the volume based on the background noise in your home. Want to play white noise to soothe your crying baby while you get her bottle ready? It’ll be loud enough to hear over all that whining.
For the nursery that absolutely must be perfect
In our house, the temperature can fluctuate depending on what time of day it is. That makes it tough to make sure Ruth is at a comfy temperature for naps and nights. Luckily, the new Nest Thermostat offers Quick Schedule, which lets you set a custom temperature at different times of the day. That way, we can make sure Ruth’s nursery is at the right temperature at night, but our office isn’t stiflingly hot during the day. Plus, the thermostat is simple to use and at an affordable price, which makes it an easy fit for many families.
For the family looking for shows to watch together
With the COVID-19 pandemic keeping families at home more than usual, it’s extra crucial to find shows everyone agrees on. Thankfully, the new Chromecast with Google TV gives you personalized recommendations based on what you like to watch. And its new remote lets you control your smart home using Google Assistant.
For the couple constantly shouting across the house
We have various Nest devices throughout the house, and we use them to communicate with one another. If I’m feeding Ruth in the nursery downstairs and she’s hungrier than I expect, for example, I say, “Hey Google, broadcast to Kitchen Display: ‘I need another bottle,’” so my husband can bring down a bottle. And when tracking how much she drank, we’d ask the Assistant to convert milliliters to ounces, or just do basic addition and subtraction when we were too sleepy to calculate how much she had to eat. Data-loving parents like me can also use a list to track feeding amounts and nap times via Keep, Docs or other note-taking apps.
For the grandparents who miss their little ones
My daughter was born in the months before the COVID-19 pandemic hit the United States, so we were lucky to have family come by and help out until she was about two months old. But by March, we were quarantined, leaving the grandparents sorely missing their granddaughter. With our Nest Hub Max, we can make hands-free video calls on Google Duo—and when the baby naps, we can quickly decline a few overenthusiastic calls from Grandpa’s Nest Smart Display or smartphone app. And the Hub Max’s camera angle moves with us throughout the room, so we can get chores done (or just keep Ruth happy) while we keep in touch.
These days, Ruth can handle being put down. (Well, at least sometimes.) But I know that Nest will keep being a helping hand as she gets older—and especially when she asks me to play cartoons on repeat.
At Wayfair, we use data to advance our business processes and help our suppliers work more efficiently, all with the end goal of delivering great customer experiences. As one of the world’s largest online destinations for the home, our massive scale allows us to use data to delight our customers and help our thousands of suppliers to identify opportunities and bottlenecks. We had previously worked with Google Cloud for our storefront expansion and relied on them to help us scale our web service that was supporting the buyer experience. As we continue to rapidly grow, this partnership will give us more flexibility to handle surges in customer web traffic and unlock more ways to improve the shopping experience. Being able to help scale operations, while providing a richer experience for our customers, employees, and suppliers, gave us the confidence to continue to work with Google Cloud for our analytics needs.
Improving our customer and supplier experience
With over 18 million products from more than 12,000 suppliers, helping customers find exactly the right item for their needs across a vast supplier ecosystem presents exciting challenges: managing our online catalog and inventory, building a strong logistics network that includes route optimization and bin packing, and making it easier to share product data with our suppliers.
At Wayfair, we work hand-in-hand with our suppliers so that we can help them grow their businesses and create offerings that are a win-win for both the supplier and for customers. Thanks to this partnership mindset, our suppliers benefit from a steady stream of recommendations that are informed by data. For example, we might let a supplier know that there is an opportunity to capitalize on demand within a certain category by making some merchandising adjustments, such as creating more robust product descriptions. We might also work with a supplier to identify ways to incorporate product tags that allow us to surface a more personalized offering for customers whose aesthetic preferences lean toward a certain style. We are in constant dialogue with our supplier partners, sharing insights like “We know there’s a growing demand for this category and you could surface your products better if you made these adjustments to your merchandising decisions,” or working with them on questions such as, “If we have tens of thousands of sofas, how do we offer personalized recommendations to our end buyers?” Obviously, providing this level of analysis at scale requires a platform that is able to process massive amounts of data across multiple systems.
Why we chose Google Cloud
We chose Google Cloud because we knew they could scale to meet our needs. Google Cloud helped us effectively centralize our data on a platform with low operational overhead, enabling our data analysts and data scientists to run business-critical analytics. With Google Cloud, we were able to move our application datastores, data movement, and analytics and data science tools all into one place, which gave our developers and analysts the ability to store, secure, enrich, and present data that our teams could take action on.
Google Cloud’s flexibility and embracing of open-source solutions in products like Dataproc and Composer was proof to us that they are investing in a platform without too much proprietary technology, which made it easier for our teams to adopt and use those tools. The team also liked how easy it was to move data in from different sources into Google Cloud. Plus, Google Cloud’s consistent data access model improved data governance for Wayfair. The standardization on Cloud Identity and Access Management (Cloud IAM) controls makes sure that our data is accessible to the right people and always secure.
Google Cloud’s fully managed platform has well-defined services, which made it easy for us to use and adopt products across the portfolio. For example, the Cloud DLP API can be composed together with other Google Cloud tools such as BigQuery and Pub/Sub to build integrated applications for data security, and the BigQuery Storage API and managed metastore offerings enable smooth integration of open source products with Google’s data platform offerings.
How we modernized our data stack
We needed a way to get our streaming and batch data available quickly for insights. In our previous environment, we maintained data warehouse systems that required multiple copies of data to scale and required complex data synchronization routines. This had resulted in long lead times for our team.
Now, we ingest event data through Pub/Sub and Dataflow as our real-time data pipeline, and we centralize our data in Dataproc, Cloud Storage, and BigQuery storage to help overcome data silos and derive actionable insights. Because BigQuery decouples compute and storage, we’re able to operate with more agility. Unstructured data lives in Dataproc, while structured data lives in BigQuery. Our Dataproc instance is a single managed cluster with autoscaling for Hive, Presto, and Spark jobs that read data from BigQuery and Cloud Storage-based tables. We visualize our data in Looker, building curated dashboards that offer a high-level summary along with the ability to drill into diagnostics on what’s driving a particular business metric. We also use Data Studio for operational reporting, which is straightforward to spin up on BigQuery.
By analyzing data from our operational SQL stores alongside our application data in BigQuery, we were able to improve our inventory and demand forecasting, helping our suppliers make better decisions and generate more revenue, faster. Using BigQuery’s flat-rate pricing option, we were able to ensure price predictability for our business.
Enjoying the results of a cloud data platform
At Wayfair, we have always believed in the value of data and recognize the importance of maintaining volume, velocity, and agility as we continue to grow. Google Cloud’s powerful and accessible infrastructure has let our data teams reallocate their time and effort from moving and managing the data to instead innovating on what’s next.
BigQuery and Dataproc give us high-performance, low-maintenance access to our data at scale. Google Cloud’s analytics product offerings support the full set of requirements of our internal and external users—from descriptive analytics to prescriptive alerting and ML—in a platform that effectively blends Google’s internal technology with open-source standards and technologies.
In addition to enjoying the scalability and power these tools bring, we also value the performance. In production, we are seeing a greater than 90% reduction in the number of analytic queries that take more than one minute to run. The combination of scale and speed is generating impressive adoption.
Less than a year into our transition, the migration has had tangible benefits: users of the cloud tooling report 30% higher NPS for the platform offerings than for existing alternatives, with significantly lower investment in support. We get more business done with less effort, and our users are more satisfied with Google Cloud.
We’re looking forward to our continued work with Google Cloud in improving our overall customer and supplier experience.
Want to learn more about Wayfair?
Check out all the exciting things happening at Wayfair engineering on our blog, and if you want to work on these kinds of challenges with a talented, global team, check out our Engineering and ML roles.
Quick launch summary
- Live streaming viewership data. Attendance reports will now include data on live streaming viewers in a separate tab. Live stream data will include total viewer count and viewers over the course of the live stream.
- Admin control over attendance tracking. Admins can choose whether an organizational unit can use the attendance tracking feature or not.
- New settings to control report creation. Meeting organizers who are not in Education domains can choose whether a report is created for the meeting via in-meeting settings or from the Calendar event before the meeting starts. Meeting organizers at Education domains will continue to automatically receive attendance reports for meetings with five or more participants.
- In-call viewer count for live stream events. Live stream hosts and meeting attendees (not live stream viewers) will now see a count of the total number of live stream viewers when joining via Meet from a desktop or laptop computer.
- Attendance reports for more customers. We are making attendance reports available to Google Workspace Essentials, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus customers. Previously, they were only available to Enterprise for Education customers.
- Attendance and live stream reports will be available by default for your users, but you can make them unavailable to your domain or specific OUs and groups. Visit the Help Center to learn more about managing attendance reports for your organization.
- In-call viewer count for live streams will be ON by default and can’t be turned off. Visit our Help Center to learn more about live streaming a video meeting.
- Users in Enterprise for Education domains: Meeting organizers will continue to receive reports automatically for meetings with five or more participants.
- Users in other domains: Attendance reports will be OFF by default, but meeting organizers can turn them ON for any meeting using the meeting settings during the call or the calendar event before the call. Visit the Help Center to learn more about how to turn attendance reports on or off in Google Meet.
- All users: In-call viewer count for live streams will be ON by default and can’t be turned off. Visit our Help Center to learn more about live streaming a video meeting.
- Rapid and Scheduled Release domains: Full rollout (1–3 days for feature visibility) starting on November 24, 2020
- Rapid and Scheduled Release domains: Full rollout (1–3 days for feature visibility)
- Available to Google Workspace Essentials, Business Plus, Enterprise Essentials, Enterprise Standard and Enterprise Plus, as well as G Suite Enterprise for Education customers
- Not available to Google Workspace Business Starter, Business Standard, as well as G Suite Basic, Business, Education, and Nonprofits customers
- This feature was listed as an upcoming release.
This year presented many unforeseen and unthinkable global challenges, from shifting to remote work and providing essential services to affected communities, to working tirelessly to keep the economy afloat. Many of us often wished there was a magic formula to help smooth these transitions.
And while a magic formula might not exist, many governments across Europe, the Middle East, and Africa (EMEA) followed a common set of behaviours that helped the public sector diffuse digital technology within their organisations. These five principles included:
Prioritising data security, privacy, and sovereignty
Creating exciting services and experiences with artificial intelligence (AI)
Identifying and navigating the pitfalls of application modernisation
Viewing sustainability as a net-positive
Putting people at the centre of cloud adoption.
At the Google Cloud Public Sector Summit on December 9, in our EMEA keynote and the sessions that follow, we’ll learn from the organisations and governments that applied these principles to navigate these difficult times, improve operational efficiency, and deliver digital solutions that serve and excite employees, customers, and citizens. Leaders from across the public sector will share insights and answer questions in real time on how Google Cloud has helped them accelerate innovation when it mattered most, delivering outstanding public services that are secure, responsive, and efficient.
Our EMEA-focused sessions will show you how to:
Chart a course to digital sovereignty – delivering greater autonomy in data, operational, and software sovereignty
Balance security and access policies – providing open and secure cloud platforms with interoperability and innovation
Innovate whilst also driving operational efficiency – sharing real-world experiences and insights in this panel discussion
Improve the citizen experience – using sentiment analysis and AI and machine learning (ML) to derive intelligence from data
Fast track infrastructure modernisation – highlights, practical tips and real-world strategies for modernising with Apigee
Ask the Expert: Need help with something specific? Sign up to chat with a Google Cloud expert on one of these topics: smart analytics, security, infrastructure modernisation, application modernisation, productivity and collaboration, AI, or digital transformation. Spaces will fill up, so don’t delay.
Engage with Partners: Learn how our partner ecosystem can help to bolster your Google Cloud solution.
The Google Cloud Public Sector Summit is a free online event. Join us on Dec. 8–9 – register today. If you have any questions about the event, please reach out to us at email@example.com.
Posted by Itay Inbar, Senior Software Engineer, Google Research
Last year we launched Recorder, a new kind of recording app that made audio recording smarter and more useful by leveraging on-device machine learning (ML) to transcribe the recording, highlight audio events, and suggest appropriate tags for titles. Recorder makes editing, sharing and searching through transcripts easier. Yet because Recorder can transcribe very long recordings (up to 18 hours!), it can still be difficult for users to find specific sections, necessitating a new solution to quickly navigate such long transcripts.
To increase the navigability of content, we introduce Smart Scrolling, a new ML-based feature in Recorder that automatically marks important sections in the transcript, chooses the most representative keywords from each section, and then surfaces those keywords on the vertical scrollbar, like chapter headings. The user can then scroll through the keywords or tap on them to quickly navigate to the sections of interest. The models used are lightweight enough to be executed on-device without the need to upload the transcript, thus preserving user privacy.
(Smart Scrolling feature UX)
Under the hood
The Smart Scrolling feature is composed of two distinct tasks. The first extracts representative keywords from each section and the second picks which sections in the text are the most informative and unique.
For each task, we utilize two different natural language processing (NLP) approaches: a distilled bidirectional transformer (BERT) model pre-trained on data sourced from a Wikipedia dataset, alongside a modified extractive term frequency–inverse document frequency (TF-IDF) model. By using the bidirectional transformer and the TF-IDF-based models in parallel for both the keyword extraction and important section identification tasks, alongside aggregation heuristics, we were able to harness the advantages of each approach and mitigate their respective drawbacks (more on this in the next section).
The bidirectional transformer is a neural network architecture that employs a self-attention mechanism to achieve context-aware processing of the input text in a non-sequential fashion. This enables parallel processing of the input text to identify contextual clues both before and after a given position in the transcript.
(Bidirectional transformer-based model architecture)
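To make this concrete, here is a toy, pure-Python sketch of single-head scaled dot-product self-attention. It illustrates the general mechanism only (it is not Recorder's model, and every name and dimension in it is invented for the example): each position attends to every other position at once, which is what lets the model use context both before and after a given word.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """Single-head scaled dot-product attention over a whole sequence.

    Every query position is compared against all key positions in one
    step (no left-to-right ordering), then outputs a weighted sum of
    the value vectors.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        weights = softmax([dot(q, k) / math.sqrt(d) for k in keys])
        # One output vector per position: a convex combination of values.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Toy 3-token sequence with 2-dimensional embeddings.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = self_attention(x, x, x)
print(len(y), len(y[0]))  # 3 output vectors, 2 dimensions each
```

Because the attention weights for each position are computed independently, the per-position loop parallelizes trivially, which is the "non-sequential" property the paragraph above refers to.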
The extractive TF-IDF approach rates terms based on their frequency in the text compared to their inverse frequency in the trained dataset, and enables the finding of unique representative terms in the text.
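As a rough, self-contained sketch of this extractive TF-IDF idea (not the production model; the background corpus statistics below are invented for illustration):

```python
import math
from collections import Counter

def tfidf_keywords(section_words, doc_freq, num_docs, top_k=3):
    """Score each word by its term frequency in the section times its
    inverse document frequency in a background corpus, then return the
    top-scoring terms."""
    counts = Counter(section_words)
    total = len(section_words)
    scores = {}
    for word, c in counts.items():
        tf = c / total
        # Smoothed IDF; words unseen in the corpus are treated as rare.
        idf = math.log((1 + num_docs) / (1 + doc_freq.get(word, 0)))
        scores[word] = tf * idf
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Invented background statistics: how many of 100 corpus documents
# contain each word.
doc_freq = {"the": 95, "for": 80, "meeting": 40, "budget": 5, "archeology": 1}
words = "the budget for the archeology meeting the budget".split()
print(tfidf_keywords(words, doc_freq, num_docs=100))
# → ['budget', 'archeology', 'meeting']
```

Common words like "the" score near zero despite appearing often, while rare-but-frequent-in-section terms float to the top, which is the "unique representative terms" behavior described above.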
Both models were trained on publicly available conversational datasets that were labeled and evaluated by independent raters. The conversational datasets were from the same domains as the expected product use cases, focusing on meetings, lectures, and interviews, thus ensuring the same word frequency distribution (Zipf’s law).
Extracting Representative Keywords
The TF-IDF-based model detects informative keywords by giving each word a score that corresponds to how representative the keyword is within the text. Much like a standard TF-IDF model, it uses the ratio of the number of occurrences of a given word in the text to its occurrences in the whole conversational dataset, but it also takes into account the specificity of the term, i.e., how broad or specific it is. The model then aggregates these features into a score using a pre-trained function curve. In parallel, the bidirectional transformer model, which was fine-tuned on the task of extracting keywords, provides a deep semantic understanding of the text, enabling it to extract precise context-aware keywords.
The TF-IDF approach is conservative in the sense that it is prone to finding uncommon keywords in the text (high bias), while the drawback for the bidirectional transformer model is the high variance of the possible keywords that can be extracted. But when used together, these two models complement each other, forming a balanced bias-variance tradeoff.
Once the keyword scores are retrieved from both models, we normalize and combine them by utilizing NLP heuristics (e.g., the weighted average), removing duplicates across sections, and eliminating stop words and verbs. The output of this process is an ordered list of suggested keywords for each of the sections.
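A simplified sketch of such a normalize-and-combine heuristic follows; the blend weight, stop list, and example scores are invented for illustration, and the production logic is certainly more involved:

```python
def combine_keyword_scores(tfidf_scores, transformer_scores, weight=0.5,
                           stop_words=frozenset({"the", "and", "a", "is"})):
    """Min-max normalize each model's keyword scores, blend them with a
    weighted average, drop stop words, and return keywords best-first."""
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {w: (s - lo) / span for w, s in scores.items()}

    a = normalize(tfidf_scores)
    b = normalize(transformer_scores)
    combined = {
        w: weight * a.get(w, 0.0) + (1 - weight) * b.get(w, 0.0)
        for w in set(a) | set(b)
        if w not in stop_words
    }
    return sorted(combined, key=combined.get, reverse=True)

print(combine_keyword_scores(
    {"museum": 0.9, "the": 0.7, "ticket": 0.2},   # TF-IDF model (invented)
    {"museum": 0.8, "exhibit": 0.6, "the": 0.1},  # transformer model (invented)
))
```

A keyword that both models rate highly (here "museum") wins, while stop words are filtered out regardless of their raw scores.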
Rating A Section’s Importance
The next task is to determine which sections should be highlighted as informative and unique. To solve this task, we again combine the two models mentioned above, which yield two distinct importance scores for each of the sections. We compute the first score by taking the TF-IDF scores of all the keywords in the section and weighting them by their respective number of appearances in the section, followed by a summation of these individual keyword scores. We compute the second score by running the section text through the bidirectional transformer model, which was also trained on the sections rating task. The scores from both models are normalized and then combined to yield the section score.
(Smart Scrolling pipeline architecture)
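The section-scoring step described above can be sketched as follows; the blend weight and the example numbers are invented for illustration, not the production values:

```python
def section_score(keyword_counts, tfidf_scores, transformer_score, alpha=0.5):
    """Blend the two section-importance signals: a count-weighted sum of
    TF-IDF keyword scores, and the transformer model's score for the
    section. `alpha` is an invented blend weight."""
    tfidf_total = sum(tfidf_scores.get(word, 0.0) * count
                      for word, count in keyword_counts.items())
    return alpha * tfidf_total + (1 - alpha) * transformer_score

# A section mentioning "budget" twice and "meeting" once:
score = section_score({"budget": 2, "meeting": 1},
                      {"budget": 0.7, "meeting": 0.1},
                      transformer_score=0.6)
# = 0.5 * (2*0.7 + 1*0.1) + 0.5 * 0.6 ≈ 1.05
```

Ranking all sections by this score and keeping the top ones (proportional to recording length, as described below) yields the highlighted sections.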
A significant challenge in the development of Smart Scrolling was how to identify whether a section or keyword is important – what is of great importance to one person can be of less importance to another. The key was to highlight sections only when it is possible to extract helpful keywords from them.
To do this, we configured the solution to select the top scored sections that also have highly rated keywords, with the number of sections highlighted proportional to the length of the recording. In the context of the Smart Scrolling features, a keyword was more highly rated if it better represented the unique information of the section.
To train the model to understand this criterion, we needed to prepare a labeled training dataset tailored to the task. In collaboration with a team of skilled raters, we applied this labeling objective to a small batch of examples to establish an initial dataset, evaluated the quality of the labels, and instructed the raters in cases where the labels deviated from what was intended. Once the labeling process was complete, we reviewed the labeled data manually and corrected labels as necessary to align them with our definition of importance.
Using this limited labeled dataset, we ran automated model evaluations to establish initial metrics on model quality. These served as a fast but less accurate proxy for true model quality, enabling us to quickly assess performance and apply changes to the architecture and heuristics. Once the metrics were satisfactory, we moved to a more accurate manual evaluation over a closed set of carefully chosen examples representing expected Recorder use cases, and tweaked the model heuristics’ parameters to reach the desired level of performance.
After the initial release of Recorder, we conducted a series of user studies to learn how to improve the usability and performance of the Smart Scrolling feature. We found that many users expect the navigational keywords and highlighted sections to be available as soon as the recording is finished. Because the computation pipeline described above can take a considerable amount of time to compute on long recordings, we devised a partial processing solution that amortizes this computation over the whole duration of the recording. During recording, each section is processed as soon as it is captured, and then the intermediate results are stored in memory. When the recording is done, Recorder aggregates the intermediate results.
When running on a Pixel 5, this approach reduced the average processing time of an hour long recording (~9K words) from 1 minute 40 seconds to only 9 seconds, while outputting the same results.
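The amortization idea can be sketched like this (a hypothetical interface, not Recorder's actual code): do the expensive per-section work while the recording is still in progress, cache the partial results, and reduce the finishing step to a cheap aggregation.

```python
class IncrementalProcessor:
    """Processes each section as soon as it is captured and caches the
    intermediate result, so finishing a recording only requires a cheap
    aggregation step instead of reprocessing the whole transcript.
    (Interface invented for illustration.)"""

    def __init__(self, process_section):
        self.process_section = process_section
        self._partials = []

    def on_section_captured(self, section_text):
        # Expensive per-section work happens during recording.
        self._partials.append(self.process_section(section_text))

    def finish(self):
        # Cheap aggregation when the recording stops: rank the cached results.
        return sorted(self._partials, key=lambda r: r["score"], reverse=True)

# Toy scorer: longer sections score higher.
proc = IncrementalProcessor(lambda s: {"text": s, "score": len(s.split())})
for section in ["short one", "a much longer section here"]:
    proc.on_section_captured(section)
print(proc.finish()[0]["text"])  # → "a much longer section here"
```

Spreading the heavy work over the recording itself is what turns a minute-plus of end-of-recording processing into seconds, as reported above.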
The goal of Recorder is to improve users’ ability to access their recorded content and navigate it with ease. We have already made substantial progress in this direction with the existing ML features that automatically suggest title words for recordings and enable users to search recordings for sounds and text. Smart Scrolling provides additional text navigation abilities that will further improve the utility of Recorder, enabling users to rapidly surface sections of interest, even for long recordings.
Acknowledgements
Bin Zhang, Sherry Lin, Isaac Blankensmith, Henry Liu, Vincent Peng, Guilherme Santos, Tiago Camolesi, Yitong Lin, James Lemieux, Thomas Hall, Kelly Tsai, Benny Schlesinger, Dror Ayalon, Amit Pitaru, Kelsie Van Deman, Console Chen, Allen Su, Cecile Basnage, Chorong Johnston, Shenaz Zack, Mike Tsao, Brian Chen, Abhinav Rastogi, Tracy Wu, Yvonne Yang.
Earlier this year, we announced Cloud Load Balancer support for Cloud Run. You might wonder: aren’t Cloud Run services already load-balanced? Yes, each *.run.app endpoint load balances traffic between an autoscaling set of containers. However, with the Cloud Load Balancing integration for serverless platforms, you can now fine-tune lower levels of your networking stack. In this article, we will explain the use cases for this type of setup and build an HTTPS load balancer from the ground up for Cloud Run using Terraform.
Why use a Load Balancer for Cloud Run?
Every Cloud Run service comes with a load-balanced *.run.app endpoint that’s secured with HTTPS. Furthermore, Cloud Run also lets you map your custom domains to your services. However, if you want to customize other details about how your load balancing works, you need to provision a Cloud HTTP load balancer yourself.
Here are a few reasons to run your Cloud Run service behind a Cloud Load Balancer:
- Serve static assets with a CDN, since Cloud CDN integrates with Cloud Load Balancing.
- Serve traffic from multiple regions: Cloud Run is a regional service, but you can provision a load balancer with a global anycast IP and route users to the closest available region.
- Serve content from mixed backends; for example, your /static path can be served from a storage bucket while /api goes to a Kubernetes cluster.
- Bring your own TLS certificates, such as wildcard certificates you might have purchased.
- Customize networking settings, such as the TLS versions and ciphers supported.
- Authenticate and enforce authorization for specific users or groups with Cloud IAP (this does not yet work with Cloud Run, however, so stay tuned).
- Configure WAF or DDoS protection with Cloud Armor.
The list goes on: Cloud HTTP Load Balancing has quite a lot of features.
Why use Terraform for this?
The short answer is that a Cloud HTTP Load Balancer consists of many networking resources that you need to create and connect to each other. There’s no single “load balancer” object in GCP APIs.
To understand the upcoming task, let’s take a look at the resources involved:
- a global IP address for your load balancer
- a Google-managed SSL certificate (or bring your own)
- a forwarding rule to associate the IP address with a target proxy
- a target HTTPS proxy to terminate your HTTPS traffic
- a target HTTP proxy to receive HTTP traffic and redirect it to HTTPS
- a URL map to specify routing rules for URL path patterns
- a backend service to keep track of eligible backends
- a network endpoint group (NEG) to register serverless apps as backends
As you might imagine, it is very tedious to provision and connect these resources just to achieve a simple task like enabling CDN.
You could write a bash script with the gcloud command-line tool to create these resources; however, it would be cumbersome to handle corner cases, like whether a resource already exists or has been modified manually later. You would also need to write a cleanup script to delete what you provisioned.
This is where Terraform shines. It lets you declaratively configure cloud resources and create/destroy your stack in different GCP projects efficiently with just a few commands.
Building a load balancer: The hard way
The goal of this article is to intentionally show you the hard way, walking through each resource involved in creating a load balancer using the Terraform configuration language.
We’ll start with a few Terraform variables:
- var.name: used for naming the load balancer resources
- var.project: GCP project ID
- var.region: region to deploy the Cloud Run service
- var.domain: a domain name for your managed SSL certificate
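Declared in Terraform, these variables might look like the following (descriptions and the default value are illustrative):

```hcl
variable "name" {
  description = "Name prefix for the load balancer resources"
  type        = string
  default     = "myservice"
}

variable "project" {
  description = "GCP project ID"
  type        = string
}

variable "region" {
  description = "Region to deploy the Cloud Run service in"
  type        = string
}

variable "domain" {
  description = "Domain name for the Google-managed SSL certificate"
  type        = string
}
```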
First, let’s define our Terraform providers:
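A minimal provider configuration might look like this; the google-beta provider is included because some serverless networking features have historically required it:

```hcl
provider "google" {
  project = var.project
  region  = var.region
}

provider "google-beta" {
  project = var.project
  region  = var.region
}
```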
Then, let’s deploy a new Cloud Run service named “hello” with the sample image, and allow unauthenticated access to it:
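As a sketch (the sample image and resource names are illustrative), this could look like:

```hcl
resource "google_cloud_run_service" "default" {
  name     = "hello"
  location = var.region

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"
      }
    }
  }
}

# Allow unauthenticated (public) invocations of the service.
resource "google_cloud_run_service_iam_member" "public" {
  service  = google_cloud_run_service.default.name
  location = google_cloud_run_service.default.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```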
If you manage your Cloud Run deployments outside Terraform, that’s perfectly fine: you can still use the equivalent data source to reference that service in your configuration file.
Next, we’ll reserve a global IPv4 address for our global load balancer:
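For example:

```hcl
resource "google_compute_global_address" "default" {
  name = "${var.name}-address"
}
```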
Next, let’s create a managed SSL certificate that’s issued and renewed by Google for you:
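A sketch of a Google-managed certificate for the domain in var.domain:

```hcl
resource "google_compute_managed_ssl_certificate" "default" {
  name = "${var.name}-cert"

  managed {
    domains = [var.domain]
  }
}
```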
If you want to bring your own SSL certificates, you can create your own google_compute_ssl_certificate resource instead.
Then, make a network endpoint group (NEG) out of your serverless service:
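A serverless NEG pointing at the Cloud Run service might look like this (serverless NEGs have required the google-beta provider in some provider versions, hence the explicit provider argument):

```hcl
resource "google_compute_region_network_endpoint_group" "default" {
  provider              = google-beta
  name                  = "${var.name}-neg"
  region                = var.region
  network_endpoint_type = "SERVERLESS"

  cloud_run {
    service = google_cloud_run_service.default.name
  }
}
```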
Now, let’s create a backend service that’ll keep track of these network endpoints:
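As a sketch, a backend service that uses the NEG above as its only backend:

```hcl
resource "google_compute_backend_service" "default" {
  name = "${var.name}-backend"

  backend {
    group = google_compute_region_network_endpoint_group.default.id
  }
}
```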
If you want to configure load balancing features such as CDN, Cloud Armor or custom headers, the google_compute_backend_service resource is the right place.
Then, create an empty URL map that doesn’t have any routing rules and sends the traffic to this backend service we created earlier:
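With no host or path rules, the URL map only needs a default service:

```hcl
resource "google_compute_url_map" "default" {
  name            = "${var.name}-urlmap"
  default_service = google_compute_backend_service.default.id
}
```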
Next, configure an HTTPS proxy to terminate the traffic with the Google-managed certificate and route it to the URL map:
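For example:

```hcl
resource "google_compute_target_https_proxy" "default" {
  name             = "${var.name}-https-proxy"
  url_map          = google_compute_url_map.default.id
  ssl_certificates = [google_compute_managed_ssl_certificate.default.id]
}
```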
Finally, configure a global forwarding rule to route the HTTPS traffic on the IP address to the target HTTPS proxy:
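A sketch of the forwarding rule tying the reserved IP address to the HTTPS proxy on port 443:

```hcl
resource "google_compute_global_forwarding_rule" "https" {
  name       = "${var.name}-https-fr"
  target     = google_compute_target_https_proxy.default.id
  ip_address = google_compute_global_address.default.address
  port_range = "443"
}
```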
After writing this module, create an output variable that lists your IP address:
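For example:

```hcl
output "load_balancer_ip" {
  value = google_compute_global_address.default.address
}
```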
When you apply these resources and set your domain’s DNS records to point to this IP address, a huge machinery starts rolling its wheels.
Soon, Google Cloud will verify your domain name ownership and start issuing a managed TLS certificate for your domain. After the certificate is issued, the load balancer configuration will propagate to all of Google’s edge locations around the globe. This might take a while, but once it completes, your Cloud Run service will be reachable through the load balancer.
Astute readers will notice that so far this setup cannot handle unencrypted HTTP traffic. As a result, any requests that come in over port 80 are dropped, which is not great for usability. To mitigate this, you need to create an additional URL map, target HTTP proxy, and forwarding rule that redirect HTTP traffic to HTTPS:
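A sketch of the HTTP-to-HTTPS redirect resources (resource names are illustrative; strip_query must be set explicitly on the redirect):

```hcl
# URL map that redirects all plain-HTTP requests to HTTPS.
resource "google_compute_url_map" "http_redirect" {
  name = "${var.name}-http-redirect"

  default_url_redirect {
    https_redirect = true
    strip_query    = false
  }
}

resource "google_compute_target_http_proxy" "http_redirect" {
  name    = "${var.name}-http-proxy"
  url_map = google_compute_url_map.http_redirect.id
}

# Listen on port 80 of the same global IP address.
resource "google_compute_global_forwarding_rule" "http" {
  name       = "${var.name}-http-fr"
  target     = google_compute_target_http_proxy.http_redirect.id
  ip_address = google_compute_global_address.default.address
  port_range = "80"
}
```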
As we near 150 lines of Terraform configuration, you have probably realized by now that this is indeed the hard way to get a load balancer for your serverless applications.
If you’d like to try out this example, feel free to obtain a copy of this Terraform configuration file from this gist and adapt it for your needs.
Building a load balancer: The easy way
To address the complexity in this experience, we have been designing a new Terraform module specifically to skip the hard parts of deploying serverless applications behind a Cloud HTTPS Load Balancer.
Stay tuned for the next article, where we take a closer look at this new Terraform module and show you how much easier this can get.
Quick launch summary
- Allows you to edit, comment, and collaborate on Microsoft Office files using the powerful real-time collaboration tools in Google Docs, Sheets, and Slides.
- Improves sharing options and controls, and reduces the need to download and email file attachments.
- Streamlines workflows by reducing the need to convert file types.
- Admins: Use our Help Center to learn more about managing Office editing for your organization.
- End users: This feature will be on by default. Use our Help Center to learn more about working with Office files using Office editing. To use the feature, make sure your Google Drive app is up to date.
- This feature is available now for all users.
- Available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus as well as G Suite Basic, Business, Education, Enterprise for Education, and Nonprofits customers
- Available to users with personal Google Accounts
- Google Workspace Admin Help: Set up Office editing
- Google Help: Work with Office files using Office editing
- Workspace Updates Blog: Office editing on Android brings Google Workspace collaboration to Microsoft Office files
- Workspace Updates blog: Office editing makes it easier to work with Office files in Docs, Sheets, and Slides
On June 15, 2020 we announced the general availability of the Google Maps Platform Gaming solution, including the Maps SDK for Unity and Playable Locations API, to officially bring our knowledge of the world to all games built with Unity for the first time.
The goal of the Gaming solution is to enable developers to build real-world and location-based games that create immersive experiences for players. In some games, such as The Walking Dead: Our World by Next Games, this meant creating gameplay that takes players out into the real world. In others, such as the recently released PAC-MAN GEO™ by BANDAI NAMCO Entertainment Inc., this meant bringing the real world into a familiar game that players already know and love.
Let’s take a look at a few recently released games, and how game studios are building the next generation of location-based games with the Google Maps Platform Gaming solution.
PAC-MAN GEO (BANDAI NAMCO Entertainment, Inc.)
First released on May 22, 1980, PAC-MAN™ has been a worldwide hit ever since. In October 2020, BANDAI NAMCO Entertainment Inc. released PAC-MAN GEO, which allows players to set PAC-MAN, Blinky, Pinky, Inky, and Clyde loose on the streets of 170 countries around the world, just in time for the 40th anniversary of PAC-MAN.
“For us, there was always the question of what it would be like to play PAC-MAN in a real-world setting. Today, with gaming technology evolving from arcades, to consoles, to mobile platforms, and to VR/AR, we are always exploring new ways of playing PAC-MAN,” said Yuji Yoshii, producer at BANDAI NAMCO.
TSUBASA+ (MIRAIRE, Inc.)
TSUBASA+ is the latest game from MIRAIRE, Inc, based on the popular Captain Tsubasa manga and anime series that first debuted in 1981. In the game, players get to play as their favorite soccer superstars from the beloved, long-running series.
“With characters from Japan’s world-renowned Captain Tsubasa series merging the virtual and real worlds, we believe that TSUBASA+ will become an everyday app for football fans,” said Takuya Aoki from MIRAIRE. “A key feature of a real-world game is that the actual world is viewed as a map field in the game. We thought that this feature and football, which has strong regional flavors, would go well together.”
Jackpot Planet: City Adventures (WGAMES)
At WGAMES, the team used the Google Maps Platform Gaming solution to create over 20 replicas of the most famous cities in the world for their new game Jackpot Planet: City Adventures. Players who join Jackpot Planet begin their journey in their actual city, or a nearby one, where they explore real-world sites and monuments, solve hidden object games, and play WGAMES-made free slot machines.
“Our long-term goal is to interconnect all cities using Google Maps Platform and create a completely independent world where players can visit different places in real life, while fully remaining in the Google Maps Platform-enabled world and travel city-to-city using virtual planes, trains, etc,” said Daniel Kajouie, President and CEO of WGAMES. “The first step toward that goal was to build a unique platform that now allows them to rapidly develop new cities and levels on an ongoing basis, which they accomplished in 18 months using the tools provided by the Google Maps Platform Gaming solution.”
Get it from Google Play.
Millionaire Tycoon: World (SavySoda)
Millionaire Tycoon: World uses the Google Maps Platform gaming solution to create gameplay that teaches about real estate investment and money management. Players purchase and renovate real-world properties and they can cooperate with fellow players to build businesses or battle it out. You won’t need millions of dollars in your bank account to start.
When asked how the idea for this newest addition to the Millionaire Tycoon series came about, CEO of SavySoda Xin Zhao said, “Over 10 years ago we created the original Millionaire Tycoon. We felt strongly we could create a fun game that also educates players about the importance of investment and money management. Once we discovered the Google Maps Platform Gaming solution, we knew there was an opportunity to take our concepts from Millionaire Tycoon Classic to the next level. With access to real-world locations and content, we can create a truly immersive investment simulation game that’s not only fun, but social and educational as well.”
Our team is excited and inspired by the many creative ways game developers all over the world are already using Google Maps Platform, and we can’t wait to see what comes next. To learn more about Google Maps Platform Gaming solution, check out our Getting Started video and documentation.
For more information on Google Maps Platform, visit our website.
PAC-MAN GEO™ & ©2020 BANDAI NAMCO Entertainment Inc.