From the kitchen to the Maison to low orbit: 3 surprising Google Cloud stories from this week

A new era of cloud computing is upon us, and organizations of all shapes and sizes are building their transformation clouds to create what’s next. You can see this manifesting everywhere—including in a few places you might not immediately expect. 

Back in December, we published a story about using Google Cloud AI to create baking recipes. This resulted in Mars, Inc. approaching us for a Maltesers + AI kitchen collaboration featuring our very own Sara Robinson, co-author of our original blog post. Maltesers are a popular British candy made by Mars that have an airy malted milk center with a delicious chocolate coating. We saw this opportunity as a way to partner with a storied and innovative company like Mars and also a chance to showcase the magic that can happen when AI and humans work together. Find out what happened, and even try the recipe.


Maison Cartier is globally renowned for the timeless design of its jewelry and watchmaking creations. And while it prides itself on the vastness of its collection, manually browsing its catalog to find specific models, or comparing several models at once, could sometimes take quite some time for a sales associate at one of the Maison’s 265 boutiques. This was not ideal for a brand that is known for its swift and efficient client service. Thus, in 2020, Cartier turned to Google Cloud and its advanced AI and machine learning capabilities for a solution. This week, we shared how we helped the Maison create an app that enables boutique teams to take a picture of a desired watch model (or use any existing photo of it as reference) and quickly find its equivalent product page online. Learn how they did it.


Satellites play a critical role in our daily lives, whether it's studying the Earth's weather and environment, helping people and things communicate in remote locations, or monitoring critical infrastructure. Many new satellites are launched into low Earth orbit (LEO), which requires a worldwide network of antennas to operate. Ground Station-as-a-Service (GSaaS) companies such as Leaf Space give satellite operators the ability to lease time on a ground network and, when a satellite is within its field of view, use Leaf Space's antennas and other equipment to communicate between the satellite and the ground. Leaf Space built their GSaaS solution on Google Cloud, and you can learn the nitty-gritty of how they did it in this blog post.

A YouTube video depicting the Leaf Space control center.

That’s a wrap for this week. Stay tuned for more transformation cloud stories in the weeks ahead.


Lighter lift-and-shifts with the new Database Migration Service

“Database migrations are super fun!” – No one ever.


There can be considerable friction in moving databases from platform to platform. If you're doing a lift-and-shift to Google Cloud, your progress toward taking advantage of cloud features slows down when you have to handle all the intricacies of:

  1. Devising a migration strategy for safely and efficiently moving data (while managing downtime)
  2. Assessing the impact of the migration
  3. Database preparation
  4. Secure connectivity setup
  5. Data replication
  6. Migration validation

Beyond that, there might be manual work to rewrite database code for a new engine or rewrite applications, and deep validation of aspects like performance impact.

It goes without saying that migration to the cloud is a complex process with many moving parts and personas involved, including a network administrator to account for infrastructure/security requirements like VPN. Most DBAs know that one of the largest risks of migrating a database is downtime, which often prevents companies from taking on the heavy task. Typically you shut down the application, create a backup of the current database schema, perform all required update operations using migration tools, restart the application, and hope that everything works fine.

But that approach doesn't work if you can't accept any downtime. PostgreSQL users, for example, often have very large databases, meaning they face hours of downtime, which for most companies isn't realistic.

Migration tools as a fast track

A number of tools are available to help you move data from one type of database to another, or to move data to another destination like a data warehouse or data lake. Moving critical datasets — entire databases — requires latest-generation, cloud-based migration tools that can handle data replication with ease while providing enhanced security. We've seen cloud-based migration tools like Alooma, Matillion, and SnapLogic, and we also know cloud migration tools need to integrate well with both the source and the target systems, enabling you to migrate databases with minimal effort, downtime, and overhead.

In 2019, Alooma joined Google Cloud, bringing us one step closer to delivering a full self-service database migration experience bolstered by Google Cloud technology. Alooma helps enterprise companies streamline database migration in the cloud with a data pipeline tool that simplifies moving data from multiple sources to a single data warehouse. The addition of Alooma and its expertise in enterprise and open source databases has been critical to bringing additional migration capabilities to Google Cloud. Then, in November 2020, Google Cloud launched the new, serverless Database Migration Service (DMS) as part of our vision for meeting these modern needs in a user-friendly, fast, and reliable way for migration to Cloud SQL. While Alooma is an ETL platform for data engineers to build a flexible streaming data pipeline to a cloud data warehouse for analytics, DMS is a database migration service for DBAs and IT professionals to migrate their databases to the cloud as part of their larger migration goals.

Database Migration Service is now GA

Database Migration Service makes it easy for you to migrate your data to Google Cloud. It's a fully managed service that helps you lift and shift your MySQL and PostgreSQL workloads into Cloud SQL. You can migrate from on-premises databases, databases self-hosted in Google Cloud, or another cloud, and get a direct path to unlocking the benefits of Cloud SQL. The focus of DMS is to manage the migration of your database schema, metadata, and the data itself. It streamlines the networking workflow, manages the initial snapshot and ongoing replication, provides the status of the migration operation, and supports one-time and continuous migrations. This lets you cut over with minimal downtime.

That’s a lot to absorb, but here are three main things I want you to take away: 

  1. With DMS, you get a simple, integrated experience to guide you through every step of the migration (not just a combination of tools to perform the assessment and migration).
  2. It's serverless. You don't need to deploy, manage, or monitor instances that run the migration. The onus of deciding on appropriate sizing, monitoring the instance, and ensuring that compute and storage are sufficient is on Google Cloud. Serverless migrations eliminate surprises and are performant at scale.
  3. It’s free.

Supported source databases include:

  • Amazon RDS for MySQL 5.6, 5.7, 8.0
  • Self-managed MySQL (on-premises or on any cloud VM) 5.5, 5.6, 5.7, 8.0
  • Cloud SQL for MySQL 5.6, 5.7, 8.0

Supported destination databases include:

  • Cloud SQL for MySQL 5.6, 5.7, 8.0

Gabe Weiss, Developer Advocate for Cloud SQL, has gone in-depth on the various migration scenarios, how DMS works, and how to prepare Postgres instances for migration, so check out his content, as well as best practices around homogeneous migrations. For now, I'll give you a quick sneak peek by walking you through the basic DMS experience for a new migration job.

A stroll through DMS

You can access DMS from the Google Cloud console under Databases. To get started, you’ll create a migration job. This represents the end-to-end process of moving your source database to a Cloud SQL destination. 

Define your migration job

First, let's define what kind of migration the job will run. I want to move a MySQL database I'm running on Google Compute Engine to Cloud SQL. I can choose between a one-time migration and continuous replication; for minimal downtime, I select continuous. Once I define my job, DMS shows me the source and connectivity configuration required to connect to and migrate the source database.


DMS clearly explains what you need to do by showing you the prerequisites directly in the UI. 


Don't skip this step! Be sure to review these prerequisites, because doing so will save you a headache during connectivity testing.

Define your source

First, you have to create a connection profile, a resource that represents the information needed to connect to a database. These profiles aren't locked to an individual migration, which means you can reuse one if you first want to test out a migration, or if someone else in your organization is in charge of connecting to the database.

  • If you’re replicating from a self-hosted database, enter the Hostname or IP address (domain or IP) and Port to access the host. 
  • If you’re replicating from a Cloud SQL database select the Cloud SQL instance from the dropdown list.
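
If you prefer to script this step, a rough gcloud sketch for creating a MySQL source connection profile might look like the following (the profile ID, region, host, and credentials are placeholders, and flag support may vary by gcloud version):

gcloud database-migration connection-profiles create mysql my-source-profile \
    --region=us-central1 \
    --host=203.0.113.10 \
    --port=3306 \
    --username=migration-user \
    --password=your-password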

Create a destination

The next step is to create the Cloud SQL instance to migrate the database to. This will feel familiar to anyone who has created a Cloud SQL instance before; you'll see many of the same options, like connectivity and machine configuration. Since DMS relies on the replication technology of the database engine, you don't have to create any additional resources aside from the Cloud SQL instance.


Connect your source to your target 

To me, this is where the magic happens, because establishing connectivity is often the hardest part. Depending on the type of your source database and its location, you can choose among four connectivity methods:

  • IP allowlists – Use this when your source database is external to Google Cloud.
  • Reverse SSH tunnel – Use this to set up private connectivity through a reverse SSH tunnel on a VM in your project.
  • VPCs through VPNs – Use this if your source database is inside a VPN (e.g., in AWS or your on-premises VPN).
  • VPC peering – Use this if your source database is in the same VPC on Google Cloud.

Since my source VM is in Google Cloud, I set up VPC Peering. Just be sure to enable the Service Networking API to do this, which provides automatic management of network configurations necessary for certain services.
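
If you prefer the command line, enabling that API is a one-liner (replace the project ID with your own):

gcloud services enable servicenetworking.googleapis.com --project=my-project-id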

Validate the migration job

And that’s it. I configured my source, created a Cloud SQL destination, and established connectivity between them. All that’s left is to validate the migration job setup and start my migration. Once it’s validated, I can trust that my migration will run smoothly.


You can start the job and run it immediately. Once the migration job has started, you can monitor its progress and see if it encounters any issues. DMS will first transfer the initial snapshot of existing data, then continuously replicate changes as they happen. 

When the initial snapshot is migrated and continuous replication is keeping up, you can promote the Cloud SQL instance to be your primary instance and point your applications to work directly against it.


DMS guides you through the process and manages the connection between your source and Cloud SQL instance with flexible options. You can test the migration and get a reliable way to migrate with minimal downtime to a managed Cloud SQL instance. It's serverless, highly performant, and free to use. If you want to give it a spin, check out the quickstart and the migration solutions guide, and let me know your thoughts and any feedback.

Looking for SQL Server migrations? You can request access to participate in the SQL Server preview.

Find me on Twitter: @stephr_wong


How to use PyTorch Lightning’s built-in TPU support

The Lightning framework is a great companion to PyTorch. The lightweight wrapper can help organize your PyTorch code into modules, and it provides useful functions for common tasks. For an overview of Lightning and how to use it on Google Cloud Platform, this blog post can get you started.

One really nice feature of Lightning is being able to train on any hardware without changing your core model code. An Accelerator API is provided with built-in support for CPU, GPU, TPU, Horovod, and more. You can even extend the API to support your own hardware.

In this blog post, we’ll see how easy it is to start training models with Lightning on TPUs. TPUs, or Tensor Processing Units, are specialized ML processors that can dramatically accelerate the time to train your model. If you’re new to TPUs, the blog post What makes TPUs fine-tuned for deep learning? is a gentle introduction to TPU architecture and benefits.


Google Cloud’s GA support for PyTorch / XLA is the bridge between PyTorch and the TPU hardware. XLA, or Accelerated Linear Algebra, compiles high level operations from your model into operations optimized for speed and memory usage on the TPU. PyTorch XLA Deep Learning Containers are available at gcr.io/deeplearning-platform-release/pytorch-xla, preconfigured with everything you need to use PyTorch on TPUs.

Now that we've introduced the concepts, let's walk through how to get started. We'll demonstrate how to set up the Cloud infrastructure: a notebook instance that is connected to a TPU node. Then, we'll show how to use this TPU node in our training from PyTorch Lightning.

Set up your notebook instance

The first step is setting up the notebook instance. From the Cloud Console, go to AI Platform > Notebooks. Select New Instance, then Customize instance, so that we can provide more specific configuration details.


Next, select a region and zone. After that, the key inputs are the environment (a PyTorch XLA image) and the machine configuration.

Select CREATE to begin provisioning your notebook instance.

If newer PyTorch XLA images are available, you can feel free to use those. Just make sure that the version matches the version of the TPU node that we’ll create next. You can browse available images at gcr.io/deeplearning-platform-release.


After the notebook has been provisioned, select OPEN JUPYTERLAB to access the JupyterLab environment. If you’d like to access the sample for this tutorial, you can open a new terminal (File > New > Terminal), and then run:
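
The exact command was shown as a screenshot in the original post; it presumably clones the AI Platform samples repository, along the lines of:

git clone https://github.com/GoogleCloudPlatform/ai-platform-samples.git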

The left sidebar will refresh after a few moments. You’ll find the sample within ai-platform samples > notebooks > samples > pytorch > lightning.

Set up your TPU node

Next, let's provision a TPU node that we can use from our notebook instance. From the Cloud Console, go to Compute Engine > TPUs. Select Create TPU Node and give it a name. Then, select a Zone and TPU type, keeping in mind that TPU availability varies by region.

Make sure to select a TPU software version that matches the version you selected for your notebook, in this case pytorch-1.8. Also, for the purposes of this tutorial, select datalab-network for the Network, so that you can access your TPU directly from the notebook instance without configuring any networking settings.


Connect your notebook instance to the TPU node

After your TPU has been provisioned, go back to the sample notebook. There are a couple of optional cells for TPU configuration that you can uncomment. Let's walk through these.

First, let's check the IP address of the Cloud TPU you created. Make sure to update --zone with your TPU zone.
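
The notebook cell is roughly equivalent to describing the TPU node with gcloud (the TPU name and zone below are examples):

gcloud compute tpus describe my-tpu --zone=us-central1-a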

Make note of a couple items here:

  • ACCELERATOR_TYPE: v3-8 tells us the TPU type and number of cores. In this case, we created a v3 with 8 cores.

  • NETWORK_ENDPOINTS: we'll need to include this IP address in an environment variable so that we can communicate with the TPU node. You can set this variable in the sample notebook, and it will be exported in this line:
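
For a TPU node, the exported variable typically looks like the following (the IP address is a placeholder; substitute the one from NETWORK_ENDPOINTS):

export XRT_TPU_CONFIG="tpu_worker;0;10.240.1.2:8470"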

Basically, exporting that one environment variable is all that’s required!

Training your model with TPUs

Lightning helps you organize your PyTorch code by extending classes such as LightningModule and LightningDataModule. The model code contained within your LightningModule can be reused across different types of hardware.

The Lightning Trainer class manages the training process. Not only does it handle standard training tasks such as iterating over batches of data, calculating losses, and so on, it also takes care of distributed training. It uses a PyTorch DistributedSampler to distribute the right data to each TPU core, and it leverages PyTorch's DistributedDataParallel strategy to replicate the model across each core, pass in the appropriate inputs, and manage gradient communication between the cores.

When creating the Trainer, you can use the tpu_cores argument to configure TPU support. You can either pass in the number of TPU cores or a specific core you’d like to use:

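Here's a minimal sketch (assuming the pytorch_lightning package; the tpu_cores argument is the only TPU-specific piece):

import pytorch_lightning as pl

# Train on all 8 cores of the v3-8 TPU node:
trainer = pl.Trainer(tpu_cores=8)

# Or pin training to one specific core (pass the core index in a list):
trainer = pl.Trainer(tpu_cores=[1])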

You can use the Brain Floating Point Format, or bfloat16, by setting the environment variable XLA_USE_BF16=1. bfloat16 has worked as well as the 32-bit floating point format in practice, while reducing memory usage and increasing performance. For more details, see this blog post: BFloat16: The secret to high performance on Cloud TPUs.

To begin training, all you need to do is call the fit() method:

trainer.fit(model)

You will then have a trained PyTorch model that you can use for inference, or to save to a model file for production deployment. You’ll see how to do that and more in the sample notebook, which you can directly open in AI Platform Notebooks.

In this blog post, we’ve seen how PyTorch Lightning running on Google Cloud Platform makes training on TPUs a breeze. We showed how to configure a TPU node and connect it to a JupyterLab notebook instance. Then, we leveraged standard PyTorch distributed training across TPU cores, by using the same, reusable model code that works on any hardware. With the power of TPUs at your fingertips, what will you solve next?


A recipe for building a conversational AI team

Contact center virtual agents, such as chatbots or voice bots, leverage the power of artificial intelligence (AI) to help businesses connect with their customers and answer questions round the clock, regardless of request volume. AI tools will only become more critical for streamlining customer experience processes in the future. 

Are you convinced about the benefits of conversational AI for your contact center but don’t know where to start? 

As UX researchers who are part of the team behind Google Contact Center AI (CCAI), we spend a lot of time with our customers and have a unique perspective on what’s needed to develop and manage virtual agents correctly. Although there is no one-size-fits-all for creating a team, we have discovered six typical roles during our numerous hours of studying contact centers. In this post, we’ll discuss each role and the additional stakeholders that are critical ingredients for getting conversational AI right. 

All engineers — the most common pitfall

Before we jump in, let’s quickly discuss why this topic is so important. Too often, we see teams made up of all engineers or a product manager responsible for doing everything. A potential reason for this is that many companies underestimate how hard it is to implement a virtual agent that can interact with a customer appropriately and naturally. More often than not, the flow of conversation becomes too nuanced to interpret. 

We even see flow mistakes in simple cases, for example, at the end of a conversation:

User:       Goodbye.
Virtual agent:         I think you said Goodbye. Do you want to restore the conversation?
User:       (What?)   [user quits the chat]

This type of misunderstanding often leaves people feeling confused or disappointed with the experience, which is a direct extension of the company. In more severe cases, inadequate attention to virtual agent design can harm your brand credibility or even cause a PR crisis. That’s why we also encourage our customers to build conversational AI from a user-centric perspective—and that requires assembling the right team. 

The core roles needed for conversational AI 

When building a virtual agent for the first time, you have to create a team with the right blend of skills, striking a balance between engineering, user experience, and data science. Through our own experience, we have identified the following three core roles:

1) Conversational Architect
A Conversational Architect (CA) is an expert at designing conversations. Like an architect designing a house, they create blueprints for the virtual agent to use when interacting with customers. They leverage professional human language skills to bring natural human speech patterns to the human to virtual agent interaction flows. For example, a virtual agent’s language should be supportive in its content, style and tone, as well as use the correct level of formality needed for a specific context. In addition, conversational architects should have a clear understanding of the product requirements and customers’ needs, working with business stakeholders to:

  • Gather customer requirements

  • Define use cases

  • Design a human-to-virtual-agent conversation flow iteratively

Let’s consider a chatbot created for travel booking and managing existing reservations. To design an interaction flow that fits the business logic, legal terms, and domain specifications, it’s necessary to communicate with multiple parties. In this particular case, a CA would be responsible for working with business stakeholders to define the essential details, such as destination, dates, or number of people traveling, needed to create a new booking. Also, CAs often transfer the conversation flow design into a chatbot or a voice bot through a bot creation platform. 

2) Bot developer
If a CA designs and manages the conversation between humans and virtual agents, then bot developers are in charge of ensuring that a virtual agent has the ability to conduct complex actions, such as checking available dates for flights. Bot developers are also responsible for supporting service integration and any additional customized UI or IVR (Interactive Voice Response) implementations.

3) Quality Assurance Testers
Due to the iterative and incremental nature of the conversational AI development and maintenance life cycle, another critical role identified from best practices is the Quality Assurance (QA) tester. This role is easily overlooked, especially in the early stages of the development process. Quality assurance testers are responsible for testing conversational AI against pre-defined use cases. Those use cases or scripts are typically created based on the design from the CA or through analysis of the available conversation data. Any breakdowns in a conversation, unexpected user experiences, or mismatched targeted flows will be discovered by QA and reported to the team. QA testers can also identify new cases or problematic flows to assist the CA in refining the conversation design.

The good-to-have roles that support strong conversational AI development

Besides the three core roles, strong conversational AI teams typically include the following good-to-have roles:

  • Product manager

  • Copywriter 

  • Data analyst 

Product managers help conversational architects define and prioritize use cases, manage the development life cycle, and communicate with multiple teams. 

Copywriters come into play after a CA has defined the interaction flow, with one primary purpose—to ensure the virtual agent’s content quality. It’s hard for conversational AI technology to understand human conversations without training data. People often use different expressions to say things, and based on what they say, we can match intent. 

In Google Cloud Dialogflow, we call these training phrases—or the different expressions people might use to say things. For example, users might use the expression “Get a pizza” or “Order pizza” for the phrase “I want a pizza.” Our customers often use call center log data or copywriters to help them create and define training phrases. 

“Designing the conversation is one piece, but writing the responses is different—copywriters can help think through the persona of a virtual agent.” — Pavel Sirotin, Conversational AI Incubator Manager

For complex virtual agents with tons of training phrases and responses, it's critical to have a copywriter on the team managing all the content. For example, if someone says "I want to book a flight to Paris," a copywriter would have to come up with at least 10 to 15 other expressions that a person might use to say the same thing, including:

  •          “I would like to book a flight to Paris”

  •          “I’d like a flight to Paris”

  •          “I plan to travel to Paris and need to book a flight” 

Once a chatbot or voice bot is launched, data analysts can then define and monitor key metrics, analyze failure root causes, and set up A/B testing for experimental features.

Other crucial stakeholders to consider

In addition to the roles that directly contribute to conversational AI development, we also see other stakeholders with varying degrees of involvement. Legal advisors can provide a comprehensive understanding of all regulations needed to define the project scope and use cases. We’ve also seen business advisors from the marketing team review the virtual agent interactions according to the updated business rules and strategies. 

To help discover the most impactful call use cases, teams will often initiate research conversations with call center managers. Managers have a stronger understanding of the interactions best suited for replacement or augmentation due to their knowledge about call volume and use case complexity. 

Call center agents are also often underestimated and overlooked when it comes to developing conversational AI. We've witnessed many successful examples of call center agents collaborating either as direct conversational AI team members or as consultants. Their knowledge from the field, including their familiarity with the business rules and their countless real conversations with customers, is extremely valuable for conversation flow design and training phrase writing. For example, in the flight booking use case, call center agents could help identify which related one-off questions are asked most frequently by customers.

Getting the recipe right

There is no single, works-every-time formula when it comes to creating a conversational AI team. It comes down to finding the mix that works best for your priorities, the stage of development at your organization, your resources, and more. At the same time, following our recipe for the core roles, good-to-have supporting roles, and other key stakeholders to consider will put you on the right path to building a world-class team. 

Conversational AI development is not magic—it's collaborative work. With Google Contact Center AI Dialogflow CX, enterprises can now build advanced virtual agents using the intuitive visual flow builder with less development effort. Read this to learn more about using Dialogflow CX to build virtual agents.


How to deploy Certificate Authority Service

Certificate Authority Service (CAS) is a highly available, scalable Google Cloud service that enables you to simplify, automate, and customize the deployment, management, and security of private certificate authorities (CA). As it nears general availability, we want to provide guidance on how to deploy the service in real world scenarios. Today we’re releasing a whitepaper about CAS that explains exactly that. And if you want to learn more about how and why CAS was built, we have a paper on that too.

"How to deploy a secure and reliable public key infrastructure with Google Cloud Certificate Authority Service" (written by Mark Cooper of PKI Solutions and Anoosh Saboori of Google Cloud) covers security and architectural recommendations for the use of Google Cloud CAS by organizations, and describes critical concepts for securing and deploying a PKI based on CAS.

What a public key infrastructure (PKI) is used for largely depends on the environment in which the PKI-issued certificates will be used. For common internet-facing services, such as a website or host where visitors to the site are largely unknown to the host, a certificate that is trusted by the visitor is required to ensure seamless validation of the host. If a visitor's browser hasn't been configured to trust the PKI from which the certificate was issued, an error will occur. To facilitate this process, publicly trusted certificate authorities issue certificates that can be broadly trusted throughout the world. However, their structure, identity requirements, certificate restrictions, and certificate cost make them ineffective for certificate needs within an organizational or private ecosystem, such as the internet of things (IoT) or DevOps.

Organizations that have a need for internally trusted certificates and little to no need for externally trusted certificates can have more flexibility, control, and security in their certificates without a per-certificate charge from commercial providers. 

A private PKI can be configured to issue the certificates an organization needs for a wide range of use cases, and can be configured to do so on a large scale, automated basis. Additionally, an organization can be assured that externally issued certificates cannot be used to access or connect to organizational resources.

The Google Cloud Certificate Authority Service (CAS) allows organizations to establish, secure and operate their own private PKI. Certificates issued by CAS will be trusted only by the devices and services an organization configures to trust the PKI.
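
As a rough illustration of what operating CAS looks like in practice, creating a CA pool and a private root CA with the gcloud privateca commands might look something like the sketch below (resource names are made up, and flags have changed between the preview and GA releases, so treat this as a sketch rather than copy-paste):

# Create an enterprise-tier CA pool, then a root CA inside it.
gcloud privateca pools create my-ca-pool \
    --location=us-central1 \
    --tier=enterprise

gcloud privateca roots create my-root-ca \
    --pool=my-ca-pool \
    --location=us-central1 \
    --subject="CN=Example Internal Root CA, O=Example Org"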

Here are our favorite quotes from the paper:

  • “CAS enables organizations to flexibly expand, integrate or establish a PKI for their needs. CAS can be used to establish and operate as an organization’s entire PKI or can be used to act as one or more CA components in the PKI along with on-premises or other CAs.”

  • “There are several architectures that could be implemented to achieve goals within your organization and your PKI: Cloud root CA, cloud issuing CA and others” 

  • “Providing a dispersed and highly available PKI for your organization can be greatly simplified through CAS Regionalization. When deploying your CA, you can easily specify the location of your CA.”

  • “CAS provides two operational service tiers for a CA – DevOps and Enterprise. These two tiers provide organizations with a balance of performance and security based on operational requirements.”



Leaf Space: Enabling next-gen satellites on Google Cloud

There is a revolution happening in the space industry. Spurred on by low rocket launch costs, component miniaturization, and digitalization, more than 15,000 satellites will launch over the next decade [1]—more than have been launched throughout the prior six decades—studying the Earth's weather and environment, helping people and things communicate in remote locations, monitoring critical infrastructure, backhauling cellular traffic, and performing other important tasks. And if all goes as planned, Leaf Space's "Ground Station as a Service," which runs on Google Cloud, will be there to help.

Most of these new satellites will be launched into low Earth orbit (LEO). Unlike traditional broadcast TV satellites, which operate in a geostationary (GEO) orbit approximately 36,000 kilometers from the Earth’s surface, LEO satellites are much closer to Earth, typically at an altitude of 500 to 2,000 kilometers. For communications missions, this reduces the latency, or the amount of time that it takes a signal to travel from the ground to the satellite and back, and increases the capacity density, or the number of bits per square kilometer that the satellite can deliver. For Earth observation satellites, this increases the resolution of images or other observations that the satellite is making. LEO satellites are also cheaper and faster to manufacture and launch.

However, there is a downside to using LEO. Unlike GEO satellites which appear to be fixed in the sky, LEO satellites move relative to the Earth’s surface. As a result, for a specific LEO satellite, there is no single ground antenna that will always have that satellite within its field of view. Uninterrupted connectivity between the ground and the satellite requires multiple antennas that are distributed around the world. The number of antennas can be reduced if interruptions to space-to-ground communications are allowed, but even in that scenario, several sites are desirable.

This creates a problem for new satellite operators. Launching even one satellite requires a worldwide network of antennas, and most of the time, those antennas will be idle when the satellite is not overhead.

Ground Station as a Service

Enter companies like Leaf Space. Since it was founded in 2014, Leaf Space’s mission has been to simplify access to space with global infrastructure composed of antennas, processing equipment, and software that it offers as a service to satellite operators. A satellite operator can lease time on Leaf Space’s ground network, and when a satellite is within its field of view, use Leaf Space’s antennas and other equipment to communicate between the satellite and ground. And because an antenna can be shared among many satellites and even satellite operators via a reservation system (much like a conference room that’s reserved by different teams over the course of the day), this lowers operating costs by creating a more efficient utilization of resources. This model is known as Ground Station as a Service (GSaaS). Today, Leaf Space operates a network of eight such GSaaS stations across Europe and New Zealand with plans to expand around the world.

Leaf Space software operating on Google Cloud

Network Cloud Engine (NCE) is the brain of Leaf Space’s GSaaS solution. 

NCE manages multiple satellite missions by ingesting relevant mission constraints, automatically optimizing a schedule of contacts between the satellites and the antennas, orchestrating the activity of the network by automatically configuring signal processing equipment at the ground stations, and enabling control and visibility for satellite mission control center operations teams.

NCE orchestrates the entire ground station network operation, while edge resources are utilized to handle baseband processing (the process needed to extract bits and bytes from an RF signal). You can see the flow of data from a satellite to its mission control center (and vice versa) in the diagram below.

Diagram: Network Cloud Engine data flow between a satellite and its mission control center.

NCE runs entirely on Google Cloud. A member of Google Cloud's startup program, Leaf Space chose Google Cloud for the wealth and maturity of its services, its worldwide regions, and its high-speed network backbone. NCE uses Google Cloud services for network connectivity, data transfer, processing, and software control and orchestration. These products enabled Leaf Space's engineering team to save several months of development time relative to implementing these services from scratch, start commercial operations, and continuously roll out upgrades and new features on a weekly basis.

Key solution design decisions and performance metrics

In designing NCE, Leaf Space took advantage of Google Cloud services in order to establish a secure, reliable, scalable, easy to maintain and efficient system by virtualizing major components of a typical ground station network backbone and avoiding any human-in-the-loop process. 

NCE is composed of several components. The main ones, such as scheduling, ground station control, data transfer, routing, and APIs are built on Google Kubernetes Engine (GKE). Additional specific tasks are handled through Cloud Functions, Cloud Load Balancer and Compute Engine.

Building the NCE on Google Cloud has enabled Leaf Space to achieve the following objectives:

  • Leverage automated continuous deployment via Cloud Build and source repositories

  • Utilize a multi-server distributed system with liveness probes to ensure zero downtime

  • Load-balance traffic with fast autoscaling for high loads 

  • Avoid wasting compute resources for low-load services

  • Eliminate operating system maintenance tasks, allowing more focus on development

Thanks to Google Cloud services, Leaf Space also reduced time-to-market of new features from weeks to days, leveraging all the automated tools for code management: any time a new tag is pushed, the code is validated, a docker container is built and set in production on GKE. A key advantage of this approach is that the team can deploy new capabilities with zero downtime, a significant advantage for a system that must run at high SLAs.

The Leaf Space solution utilizes Cloud Functions and Google Kubernetes Engine, Google products that are tightly integrated with other services such as Pub/Sub and Cloud Storage. This decreases the time to process logs, creates alerts to monitor the GSaaS network, and provides visual analysis of the data received from the satellites.

The solution is inherently scalable, making it easy for Leaf Space to add new ground stations or new customers to the production environment and to handle surges in demand.

A single NCE ground station and user running on Google Cloud.

User experience with using cloud-powered GSaaS

With Google Cloud services such as GKE, Cloud Scheduler, Memorystore for Redis, Pub/Sub, and Cloud SQL, Leaf Space was able to create a GSaaS solution that customers report is easy and straightforward to use.


To access the system, a user simply provides their spacecraft details, such as orbital parameters, launch date, and baseband configuration. NCE then creates a user account and configures the GSaaS network, including accounting for any applicable regulatory requirements, such as the user’s spectrum license. The mission constraints that the automatic scheduling service needs are set through the API, and NCE then creates an optimized schedule for communications between the satellite and the antenna network. 

When it comes time to establish a link between the satellite and the ground, NCE spins up the edge baseband processing chain and enables the data routing between the active ground station and the user interface. Any packets that are received, demodulated, and decoded are directly forwarded in real time to the user or the satellite.

Here is the video of the demo that shows an antenna moving and data flowing.

What happens to the downlinked data?

Once the data reaches the user, customers can perform further processing and extract useful information from it, for example performing weather monitoring from GPS occultation sensors, ship detection from a SAR (Synthetic Aperture Radar) acquisition or deforestation trend analysis from optical images. All these analyses can be done directly in the cloud environment by the user using Google Cloud Data Analytics, Artificial Intelligence and Machine Learning services. The ultimate end-product can then be easily stored and distributed to end customers. 

In short, Google Cloud provides an efficient way for Leaf Space to provide GSaaS services to the space ecosystem and further open the doors for development of the space economy. Having the entire processing chain in the cloud from the acquisition phase (through the GSaaS) to data analytics and distribution significantly lowers the delivery latencies and allows efficient distribution of the data. Together, Leaf Space and Google Cloud look forward to enabling the next generation of LEO satellites.  

If you want to learn more about how Google Cloud can help your startup, visit our startup page to learn more or apply for our Startup Program, and sign up for our monthly startup newsletter to get a peek at our community activities, digital events, special offers, and more.


A big thank you to Jai Dialani, Leaf Space Sr. Business Developer and the entire Leaf Space team, for creating this solution and your contributions to this blog post.

1. Euroconsult study; https://www.euroconsult-ec.com/research/WS319_free_extract_2019.pdf


Creating a SQL Server instance integrated with Active Directory using Google Cloud SQL

SQL Server instances in Google Cloud SQL now integrate with Microsoft Active Directory (AD) as a pre-GA feature that you can try out for yourself right now. This post describes the basic steps required to create a SQL Server instance with this new functionality. If you’re looking for complete details see the official documentation.

Create a domain with Managed Service for Microsoft Active Directory

The first step is to create a domain with Managed Service for Microsoft AD. This can be done easily via the “Managed Microsoft Active Directory” section in the Google Cloud Console. Click the “CREATE NEW AD DOMAIN” button and enter the following information:

  • Specify a Fully Qualified Domain Name
    • Example: ad.mydomain.com
  • Select a VPC network
    • Example: default
  • Specify a suitable CIDR range for the AD domain
    • Example: 10.1.0.0/24
  • Select a Region where the AD domain should be located
    • Example: us-central1
  • Specify an admin name for the AD domain's delegated administrator
    • Example: mydomain-admin

    With all of that information provided it should look something like this:


    Click “CREATE DOMAIN” to complete the process of creating the AD domain. You’ll have to be patient for a bit since it can take up 60 minutes for the domain to be available for use. Once it is ready it will look like this in the list of domains:


    The final step to configure the AD domain is to set the “Delegated admin” password. Click the domain name in the list of domains to go to its details page. On that page, click the “SET PASSWORD” link and make a note of the password which we’ll use in a later step of this blog post.

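For reference, the same managed AD domain can also be created from the command line; here's a rough sketch using the example values above (flag names may differ slightly by gcloud version, and the delegated admin password is still set afterwards as described):

gcloud active-directory domains create ad.mydomain.com \
    --region=us-central1 \
    --reserved-ip-range=10.1.0.0/24 \
    --authorized-networks=projects/my-project-id/global/networks/default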

    Create a SQL Server instance with Windows Authentication

    After your AD domain is available for use you can start creating SQL Server instances that utilize the AD domain to enable Windows Authentication with AD-based identities. Go ahead and try out creating a new SQL Server instance by going to the Cloud SQL section of the Google Cloud Console. Click the “CREATE INSTANCE” button. Then click “Choose SQL Server” and enter the following information:

  • Specify an Instance ID
    • Example: sql-server-with-ad
  • Enter a password for the 'sqlserver' user and make a note of it for use in a later step.
  • Select a Database version. All versions will work with Active Directory; I selected "SQL Server 2017 Standard".
  • Select a Region where the instance should be located. It is recommended that you locate the SQL Server instance in the same region as the AD domain for the lowest network latency and the best performance.
    • Example: us-central1

  • Select whether the instance should be located in a "Single zone" or "Multiple zones". For production instances, "Multiple zones" is recommended to achieve high availability, which provides automatic failover to another zone within your selected region.
  • Select "SHOW CONFIGURATION OPTIONS".
    • Click the "Connections" section to expand it and select both the "Private IP" and "Public IP" options.
    • Note: if this is the first time you are creating a Private IP for the "Network" you select, you'll be prompted that a "Private service connection is required". Click "SET UP CONNECTION", select "Use an automatically allocated IP range" in the "Enable Service Networking API" dialog that appears, and then click the "Continue" button to complete the process.
    • Back in the instance "Configuration Options", click the "Authentication" section to expand it.
    • From the dropdown menu for joining a managed Active Directory domain, select the domain that you created in the first step of this blog post.
    • Cloud SQL will automatically create a per-product, per-project service account used for authentication to the instance. You will be prompted to grant the service account the "managedidentities.sqlintegrator" IAM role.

    With all of that information provided the create instance form should look something like this:


    Click “CREATE INSTANCE” to complete the process of creating the instance. Once the instance is created you should see the new instance’s overview page that should look something like this:


    Well done! You now have a SQL Server instance on Google Cloud that you can log into using Windows Authentication with an AD-based identity. 
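
If you'd rather script the instance creation, a hedged gcloud sketch looks roughly like this (the instance name, password, and domain resource path reuse the example values above; exact flag support for --active-directory-domain may vary):

gcloud sql instances create sql-server-with-ad \
    --database-version=SQLSERVER_2017_STANDARD \
    --region=us-central1 \
    --root-password=change-me \
    --network=default \
    --active-directory-domain=projects/my-project-id/locations/global/domains/ad.mydomain.com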

    Connecting to the SQL Server instance using Windows authentication

    Let’s go ahead and confirm that everything works as expected. To do so I’ll create a Windows Server 2019 VM using Google Compute Engine. Using it I can add a new user to the Managed Active Directory, give that user access to the SQL Server instance in Cloud SQL and then connect to the SQL Server instance as that user with Windows Authentication.

    Windows Server VMs are easy to create using Google Compute Engine's marketplace.


    Searching for “Windows Server 2019” in the marketplace returns many options to choose from.

    I’ll create a VM using the “Secured SQL Server 2017 Standard on Windows Server 2019” option. 


    After choosing the VM option, click the “Launch” button and you’ll be taken to the instance creation page. Review the settings of the VM instance to be created, especially the “Network” selection under the “Networking” section, to ensure that the selected “Network” is a network that is included in your Active Directory domain. Then scroll to the bottom of the instance creation page and click the “Create” button. 

    Once the VM creation process is complete, you'll be taken to the instance detail page. Click the "Set Windows password" button to set a password to use for logging into the VM. Then use the "Remote Desktop Protocol" (RDP) button to log in to the VM.


    Once you are logged into the VM instance you can now join the VM to the Managed Active Directory domain. Click the “Windows” icon on the bottom left of the screen, type “Control Panel”, and then press ENTER. Navigate to “System and Security”, and then click “System”. Under Computer name, domain, and workgroup settings, click “Change settings”.


    Then click the “Change” button in the System Properties dialog box.


    Enter the name of your Managed Active Directory in the “Domain” input text box and click the “OK” button.


    A welcome to the domain message should appear that looks something like this:


    Now that we have a Windows Server VM in our Managed Active Directory, we can add a User. To do so, we’ll need to install the necessary Remote Server Administration Tools (RSAT). Open “Server Manager” and click the “Manage” menu item and select “Add Roles and Features” wizard.

    1. In the wizard, advance to the Select features page. You can select Features from the sidebar or select Next until you reach it.

    2. On the Select features page, in the Features list, expand Remote Server Administration Tools, and then expand Role Administration Tools.

    3. Under Role Administration Tools, select AD DS and AD LDS Tools. This enables the following features:

    • Active Directory module for Windows PowerShell

    • AD LDS Snap-Ins and Command-Line Tools

    • Active Directory Administrative Center

    • AD DS Snap-Ins and Command-Line Tools

    4. Optional: You may also want to enable the following features:

    • Group Policy Management

    • DNS Server Tools (under Role Administration Tools)

    5. Close the wizard.

    Excellent! Now the Windows Server VM is enabled with the tools to add Users to the Managed AD domain. Remember the "Delegated admin" that we specified when we created the AD domain at the beginning of this blog post? Well, now it's time to use it. We'll log out of the Windows Server VM and log back in as the "Delegated admin". Log out of the Windows VM by clicking the "Windows" icon at the bottom left of the VM instance screen, then clicking the "power" icon and selecting "Disconnect".


    Back on the Google Compute Engine instances page, click the "RDP" button to log back into the Windows Server VM, but this time do so with the AD domain "Delegated admin" username and password.


    Once logged in, open “Server Manager”, click the “Tools” menu item and select  “Active Directory Users and Computers”. 


    In the “Active Directory Users and Computers” dialog window that appears, expand the ad.mydomain.com item and click the “Cloud” sub-item. Then Click the “Create a New User in the current container” icon to create a new User.


    Enter a First and Last name for the User along with a logon name and click the “Next” button.

    Enter and confirm a Password for the User, then click the “Next” button again. Finally click the “Finish” button in the confirmation dialog box to create the User.

    Great! Now we've got a new AD domain User, but they still need to be granted access to the SQL Server instance on Cloud SQL. We can do that with Azure Data Studio. Open a browser on the Windows Server VM and go to the Azure Data Studio download page.


    Click the "System Installer" link to initiate the download, and click the "Run" button in the download dialog window that appears.


    After the Azure Data Studio installation wizard completes, with the "Launch Azure Data Studio" checkbox selected, click the "Finish" button to open the program. On the Azure Data Studio start screen, click the "New Connection" link.


    For “Server” enter the “Active Directory FQDN (Private)” value from the Cloud SQL – SQL Server instance details page:

    • Example:  private.sql-server-with-ad.us-central1.your-new-project.cloudsql.ad.mydomain.com

    For Authentication type select “SQL Login”

    For the “User name” enter “sqlserver” and for “Password” enter the password for the “sqlserver” User that you specified when you created the SQL Server instance.

    With all of that information specified the Connection Details should look something like this:


    Click the “Connect” button to connect to the SQL Server instance. Once the connection is made, click the “New query” link and enter the following query:

    CREATE LOGIN [ad.mydomain.com\clouduser] FROM WINDOWS

    and click the “Run” button.


    Wonderful! We’re completely done with the setup and now it is time for the moment of truth.

    Let’s test out connecting to our SQL Server instance as our new AD domain User via Windows Authentication. Close the Azure Data Studio. Now we’ll reopen Azure Data Studio but we’ll do so as the new AD domain User. Click the “Windows” icon at the bottom left of the VM instance screen and type “Azure Data Studio”. Right-click the “Azure Data Studio” icon and select “Run as a different user”.


    Enter the “User name” and “Password” for the AD domain User in the dialog box that appears.


    On the Azure Data Studio start page, click the “New Connection” link.  

    For “Server” enter the “Active Directory FQDN (Private)” value from the Cloud SQL – SQL Server instance details page:

    • Example:  private.sql-server-with-ad.us-central1.your-new-project.cloudsql.ad.mydomain.com

    For Authentication type select “Windows Authentication”.


    Click the “Connect” button… and smile because your SQL Server on Cloud SQL is now integrated with Managed Active Directory.


    Getting started

    Windows Authentication for Cloud SQL for SQL Server is available in preview for all customers today!  Learn more and get started.


    Recovering global wildlife populations using ML

    "Wildlife provides critical benefits to support nature and people. Unfortunately, wildlife is slowly but surely disappearing from our planet and we lack reliable and up-to-date information to understand and prevent this loss. By harnessing the power of technology and science, we can unite millions of photos from [motion-sensored cameras] around the world and reveal how wildlife is faring, in near real-time…and make better decisions." — Wildlife Insights (wildlifeinsights.org/about)

    Case study background

    Google partnered with several leading conservation organizations to build a project known as Wildlife Insights, which is a web app that enables people to upload, manage, and identify images of wildlife from camera traps. The intention is for anyone in the world who wishes to protect wildlife populations and take inventory of their health, to do so in a non-invasive way. 

    The tricky part, however, is reviewing each of the millions of photos and identifying every species, and this is where machine learning is a great help with this big data problem. The models built by the inter-organizational collaboration presently classify up to 732 species and include region-based logic, such as preventing a camera trap in Asia—for example—from classifying an African elephant (using geo-fencing). These models have been in development for several years and are continuously being evolved to serve animals all over the globe. You can learn more about it here.


    This worldwide collaboration took a lot of work, but much of the basic technology used is available to you at WildlifeInsights.org!

    And, for those interested in learning how to build a basic image classifier inspired by this wildlife project, please continue reading. You can also go deeper by trying out our sample tutorial at the end, which contains the code we used and lets you run it interactively in a step-by-step notebook (you can click the "play" icon at each step to run each process).


    How to build an image classification model to protect wildlife


    We're launching a Google Cloud series called "People and Planet AI" to empower users to build amazing apps that can help solve complex social and environmental challenges, inspired by real case studies such as the project above. In this first episode, we show you how to use Google Cloud's big data and ML capabilities to automatically classify images of animals from camera traps. You can check out the video here.

    Requirements to get started

    Hardware

    You'll need two hardware components:

    • Camera trap(s) → to take photos (which we also strongly recommend you share by uploading on Wildlife Insights to help build a global observation network of wildlife). 

    • Microcontroller(s) (like a Raspberry Pi or Arduino) → to serve as a small Linux computer for each camera. It hosts the ML model locally and does the heavy lifting of labeling images by species as well as omitting blank images that aren't helpful.

    With these two tools, the goal is to upload the labeled images over an internet connection. This can be done over a cellular network; in remote areas, however, you can periodically carry the microcontroller to a Wi-Fi-enabled area to do the transfer.

    👌 Two friendly tips

    1. 🐻 For the microcontroller to send the images it has classified over the internet, we recommend using Cloud Pub/Sub, which publishes the images as messages to an endpoint in your cloud infrastructure. Pub/Sub helps send and receive messages between independent applications over the internet (see the sketch after these tips).

    2. 🐻 When managing dozens or hundreds of camera traps and their microcontrollers, you can use Cloud IoT Core to push your ML classification model to all of these devices simultaneously, which is especially handy as you update the model with new data from the field.
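    To make the first tip concrete, here’s a minimal sketch of publishing one on-device classification result with the Pub/Sub Python client. The project ID, topic name, and message fields are placeholders for illustration, not part of the Wildlife Insights pipeline.

```python
import json
from google.cloud import pubsub_v1

# Placeholders: a Cloud project and a pre-created topic for camera-trap labels.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "camera-trap-labels")

def publish_classification(camera_id, file_name, species, confidence):
    """Publish one on-device classification result as a Pub/Sub message."""
    payload = json.dumps({
        "camera_id": camera_id,
        "file_name": file_name,
        "species": species,
        "confidence": confidence,
    }).encode("utf-8")
    future = publisher.publish(topic_path, payload)
    return future.result()  # blocks until Pub/Sub acknowledges the message

publish_classification("trap-001", "2021-03-14_08-12.jpg", "malayan_tapir", 0.93)
```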

    Wildlife Insights Hardware

    Software

    The model is trained and exported via Google Cloud using a free camera trap dataset from lila.science. As of this article’s publication, each training run costs less than $30 (a granular breakdown is listed at the bottom of this article). 

    👌 TIP: you can retrain once or twice a year, depending on how many images of new species you collect and/or how frequently you want to upgrade the image classification model.

    Wildlife Insights Software

    Image selection for training and validation of an ML model

    The two products that perform most of the heavy lifting are Dataflow and AI Platform Unified. Dataflow is a serverless data processing service that can handle the very large amounts of data needed for machine learning workloads. In this scenario, we use it to run two jobs:

    Dataflow job 1 creates a table in BigQuery from the image metadata. It uses two columns from the camera trap dataset mentioned above, and it’s a one-time setup:

    • category: the species we want to predict; this is our label.

    • file_name: the path where the image file is located.

    We also do some very basic data cleaning, discarding rows with categories that are not useful, such as #ref!, empty, unidentifiable, unidentified, and unknown (as sketched below).
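    As a rough illustration of what this first job can look like, here’s a hedged Apache Beam sketch that reads a metadata CSV, drops the unusable categories, and writes the two columns to BigQuery. The bucket, table name, and column positions are assumptions for this example, and pipeline options (project, region, temp location, runner) are omitted for brevity.

```python
import csv
import apache_beam as beam

# Categories from the article that aren't useful for training.
SKIP_CATEGORIES = {"#ref!", "empty", "unidentifiable", "unidentified", "unknown"}

def parse_row(line):
    # Assumption: the metadata CSV's first two columns are category and file_name.
    fields = next(csv.reader([line]))
    return {"category": fields[0].strip().lower(), "file_name": fields[1].strip()}

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadMetadata" >> beam.io.ReadFromText(
            "gs://my-bucket/metadata.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_row)
        | "DropUnusableCategories" >> beam.Filter(
            lambda row: row["category"] not in SKIP_CATEGORIES)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:wildlife.images",
            schema="category:STRING,file_name:STRING",
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE)
    )
```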

    Dataflow job 2 builds a list of images well suited to creating a balanced dataset. This is informed by our requirements for a minimum and maximum number of images per species category.

    Wildlife Insights Dataflow Job 2

    This helps ensure we train a bias-free model that doesn’t include a species it has too little information about and would later be unable to classify correctly. 

    Expressed in emoji, a skewed dataset looks like this: 

    Wildlife Insights Emoji

    When this dataset-balancing act is complete, Dataflow downloads the selected images from lila.science into Cloud Storage. This lets us store and process only the images that are relevant, rather than the entire dataset, keeping computation time and costs to a minimum.
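    Here’s a similarly hedged sketch of the balancing step in the second job: group image paths by category, drop under-represented species, and cap over-represented ones before handing the list off for download. The thresholds, table, and output path are placeholders, not the values Wildlife Insights uses.

```python
import random
import apache_beam as beam

MIN_IMAGES = 50    # assumed minimum images required to keep a species
MAX_IMAGES = 2000  # assumed cap per species to keep the dataset balanced

def sample_category(element):
    category, file_names = element
    file_names = list(file_names)
    if len(file_names) < MIN_IMAGES:
        return  # drop species we have too little information about
    random.shuffle(file_names)
    for file_name in file_names[:MAX_IMAGES]:
        yield f"{category},{file_name}"

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadMetadata" >> beam.io.ReadFromBigQuery(
            query="SELECT category, file_name FROM `my-project.wildlife.images`",
            use_standard_sql=True)
        | "KeyByCategory" >> beam.Map(lambda row: (row["category"], row["file_name"]))
        | "GroupByCategory" >> beam.GroupByKey()
        | "SampleWithinBounds" >> beam.FlatMap(sample_category)
        | "WriteImageList" >> beam.io.WriteToText("gs://my-bucket/balanced/images")
    )
```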

    Building the ML model

    We build a model with AI Platform Unified, which uses AutoML and the images we stored in Cloud Storage. Historically, training a model like this could take days; now it takes 2-3 hours. Once the model is ready, you can download it and import it into each microcontroller to begin classifying images locally. To show you quickly how this looks, we’ll deploy the model in the cloud and see what it thinks about some of these images.

    Building the ML model YouTube video
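    If you’d rather kick off training from code than from the console or the notebook, a minimal sketch with the aiplatform Python SDK might look like the following. The project, region, import CSV, model type, and budget are assumptions for illustration; the import CSV lists Cloud Storage image URIs with their species labels.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Create a managed image dataset from a CSV of (gs://image_uri, label) rows.
dataset = aiplatform.ImageDataset.create(
    display_name="wildlife-camera-traps",
    gcs_source="gs://my-bucket/import/labels.csv",
    import_schema_uri=(
        aiplatform.schema.dataset.ioformat.image.single_label_classification),
)

# An edge-exportable AutoML model, so it can later run on a microcontroller.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="wildlife-classifier",
    prediction_type="classification",
    model_type="MOBILE_TF_LOW_LATENCY_1",
)

model = job.run(
    dataset=dataset,
    model_display_name="wildlife-classifier",
    budget_milli_node_hours=8000,  # 8 node hours, matching the cost breakdown below
)
```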

    Export model for a microcontroller

    The prior example exported the model as an online endpoint; to export your model and download it onto a microcontroller instead, follow these instructions.

    Wildlife Insights Microcontroller
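    Once an Edge-exported model has been copied onto the device, running it locally could look roughly like the sketch below, which uses the TensorFlow Lite runtime. The model file, label file, and image name are placeholders, and the exact preprocessing depends on how your model was exported.

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # use tf.lite.Interpreter on a desktop

# Placeholders: the exported Edge model and its label file, copied to the device.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

with open("labels.txt") as f:
    labels = [line.strip() for line in f]

def classify(image_path):
    """Run one image through the on-device model and return (label, score)."""
    height, width = input_details["shape"][1:3]
    image = Image.open(image_path).convert("RGB").resize((width, height))
    input_data = np.expand_dims(np.asarray(image, dtype=input_details["dtype"]), axis=0)
    interpreter.set_tensor(input_details["index"], input_data)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])[0]
    best = int(np.argmax(scores))
    return labels[best], float(scores[best])

print(classify("capture_0001.jpg"))
```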

    Try it out

    You can use WildlifeInsights.org’s free web app to upload and classify images today. However, if you would like to build your own classification model, try out our sample or check out the code on GitHub. It requires no prior knowledge and contains all the code you need to train an image classification model and then run several predictions in an online notebook format; simply scroll to the bottom and click “Open in Colab” per this screenshot:

    Wildlife Insights Colab

    As mentioned earlier, what used to take days can now compute in 2-3 hours. All you need for this project is to:

    • ⏱️ Set aside about 2 hours (you click run for each step; the longest part runs in the background, so you simply check back after 2 hours to move on to the next and last step). 

    • 👉 Create a free Google Cloud project (you will then delete the project when you are finished to ensure you don’t incur additional costs, since the online model has an hourly cost).

    (Optional information: pricing breakdown)

    The total cost of running this sample in your own Cloud project is less than $27.61 (it will vary slightly on each run; rates are as of this article’s publication date and could change). 

    Cloud Storage falls within the free tier (below 10 GB). The total does not include Cloud IoT Core, nor the cost of keeping a prediction web service deployed in the cloud around the clock (an hourly charge) so that devices can reach it online at any time; that option applies when you don’t use a microcontroller.

    • Create images database (~5 minutes wall time): $0.07

        • Total vCPU: 0.115 vCPU hr * $0.056/vCPU hr = $0.0644

        • Total memory: 0.431 GB hr * $0.003557/GB hr = $0.001533067

        • Total HDD PD: 14.376 GB hr * $0.000054/GB hr = $0.000776304

        • Shuffle Data Processed: unknown but probably negligible

    • Train AutoML model (~1 hour wall time): $26.302431355

        • Total vCPU: 13.308 vCPU hr * $0.056/vCPU hr = $0.745248

        • Total memory: 49.907 GB hr * $0.003557/GB hr = $0.177519199

        • Total HDD PD: 3,327.114 GB hr * $0.000054/GB hr = $0.179664156

        • Shuffle Data Processed: unknown but probably negligible

        • AutoML training: 8 node hrs * $3.15/node hr = $25.20

    • Getting predictions (assuming 1 hr of model deployed time; you must undeploy the model to stop further charges): $1.25

    Read More

    Making access to SaaS applications more secure with BeyondCorp Enterprise

    An explosion of SaaS applications over the last decade has fundamentally changed the security landscape of modern enterprises. According to the Cloud Security Threat Report1, the average organization uses hundreds, possibly upwards of 1,000, SaaS applications, many of them unsanctioned by IT departments, and this number is only forecast to increase. Today, we see many organizations trying to secure modern SaaS applications with a legacy, network-based approach, where access might only be granted if a user is on the corporate network or connecting through a VPN. But conventional castle-and-moat security strategies are no longer adequate; the network from which you access resources is no longer a reliable indicator of trusted access. 

    This paradigm shift has led to increased adoption of the zero trust model, where no person, device, or network enjoys inherent trust. Instead, the trust that allows access to applications and information must be earned by meeting criteria, such as identity and other factors, defined in policies set by administrators.

    Transitioning to a zero trust model is no easy feat, which is why we recently introduced BeyondCorp Enterprise to help our customers with this challenge. BeyondCorp Enterprise allows users to implement a zero trust approach based on the same principles we use at Google and manage access to their SaaS applications hosted on Google Cloud, in other clouds, or on-premises. And now, in light of the increase in remote work, secure access to applications has never been a more relevant conversation.

    BeyondCorp Enterprise makes it easy to enforce granular access policies based on a user’s identity, organizational group, device health, encryption status, geographic origin, form of authentication, and more. But application access is only one part of our zero trust approach; once a user has access to an app, we also want to make sure their data is protected. BeyondCorp Enterprise includes new threat and data protection services, giving users an added layer of security, integrated directly in the browser without the need for an agent.

    Transitioning to a zero trust model can be a journey, but our solutions can help you get started quickly and easily. One way to do this is to plan your deployment around a targeted approach, starting, for instance, with a specific group of users or a set of SaaS apps you want to secure. You could begin with frontline workers who only need to access a point-of-sale application, or, if your organization has a large customer service operation, with employees who only need access to call center software. These and similar use cases are a great first step: they are straightforward, and a VPN is almost certainly unnecessary.

    Our new whitepaper, “Secure access to SaaS applications with BeyondCorp Enterprise,” outlines common scenarios for IT leaders to consider, and provides guidance for how they can approach each one. As with any new deployment, there are a number of security factors organizations must consider, such as:

    • How to govern zero trust access to sanctioned SaaS applications

    • How to prevent leakage of sensitive data from SaaS applications

    • How to prevent malware transfers and lateral movements via sanctioned applications

    • How to prevent visits to phishing URLs embedded in application content

    We dive deeper into each of these, as well as a selection of other scenarios, in the whitepaper. Read it here, and learn more about BeyondCorp Enterprise in our on-demand overview webinar or our product page.  


    1. Cloud Security Threat Report

    Related Article

    BeyondCorp Enterprise: Introducing a safer era of computing

    The GA of Google’s comprehensive zero trust product offering, BeyondCorp Enterprise, brings this modern, proven technology to organizatio…

    Read Article

    Read More

    Fintech startup Branch makes data analytics easy with BigQuery

    Editor’s note: Here we take a look at how Branch, a fintech startup, built their data platform with BigQuery and other Google Cloud solutions that democratized data for their analysts and scientists.  

    As a startup in the fintech sector, Branch helps redefine the future of work by building innovative, simple-to-use tech solutions. We’re an employer payments platform, helping businesses provide faster pay and fee-free digital banking to their employees. As head of the Behavioral and Data Science team, I was tapped last year to build out Branch’s team and data platform. I brought my enthusiasm for Google Cloud and its easy-to-use solutions to the first day on the job. 

    We chose Google Cloud for ease of use, data & savings 

    I had worked with Google Cloud previously, and one of the primary mandates from our CTO was “Google Cloud-first,” with the larger goal of simplifying unnecessary complexity in the system architecture and controlling the costs associated with being on multiple cloud platforms. 

    From the start, Google Cloud’s suite of solutions supported my vision of how to design a data team. There’s no one-size-fits-all approach. It starts with asking questions: What does Branch need? Which stage are we at? Will we be distributed or centralized? But above all, which parameters in the product will need to be optimized with analytics and data science approaches? In team design, product parameterization is critical. In a product-driven company, the data science team can be most effective by tuning a product’s parameters. For example, a recommendation engine for an ecommerce site is driven by algorithms and underlying models that continually update parameters: in “show X to this type of person but Y to that type of person,” X and Y are the parameters optimized by modeling behavioral patterns. Behind the scenes, data scientists can run models of how that engine should work and determine which changes are needed.  

    By focusing on tuning parameters, the team is designed around determining and optimizing an objective function. That of course relies heavily on the data behind it. How do we label the outcome variable? Is a whole labeling service required? Is it clean data with a pipeline that won’t require a lot of engineering work? What data augmentation will be needed? 

    With that data science team design envisioned, I started by focusing on user behavior—deciding how to monitor and track it, how to partner with the product team to ensure it’s in line with the product objectives, then spinning up A/B testing and monitoring. On the optimization side, transaction monitoring is critical in fintech. We need to look for low-probability events and abnormal patterns in the data, and then take action, either reaching out to the user as quickly as possible to inform them, or stopping the transaction directly. In the design phase, we need to determine if these actions need to be done in real-time or after the fact. Is it useful to the user to have that information in real time? For example, if we are working to encourage engagement, and we miss an event or an interaction, it’s not the end of the world. It’s different with a fraud monitoring system, for which you’ve got to be much more strict about real-time notifications.

    Our data infrastructure

    There are many use cases at Branch for data cloud technologies from Google Cloud. One is with “basic” data work. It’s been incredibly easy to use BigQuery, Google’s serverless data warehouse, which is where we’ve replicated all of our SQL databases, and Cloud Scheduler, the fully managed enterprise-grade cron job scheduler. These two tools, working together, make it easy to organize data pipelining. And because of their deep integration, they play well with other Google Cloud solutions like Cloud Composer and Dataform, as well as with services like Airflow from other providers. Especially for us as a startup, the whole Google Cloud suite of products accelerates the process of getting established and up and running, so we can perform the “bread-and-butter” work of data science. 
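    As an illustration of that bread-and-butter pattern (placeholder names, not Branch’s actual pipeline), a Cloud Scheduler job could trigger a small Cloud Function that materializes a reporting table in BigQuery:

```python
from google.cloud import bigquery

client = bigquery.Client()

def refresh_daily_metrics(event, context):
    """Pub/Sub-triggered Cloud Function, fired on a Cloud Scheduler cron schedule."""
    query = """
        CREATE OR REPLACE TABLE reporting.daily_metrics AS
        SELECT DATE(created_at) AS day,
               COUNT(*) AS transactions,
               SUM(amount) AS volume
        FROM replicated.transactions
        GROUP BY day
    """
    client.query(query).result()  # wait for the query job to finish
```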

    We also use BigQuery as a home for heavier statistics, and we train our models there weekly, monthly, or nightly, depending on how much data we collect. Then we leverage the messaging and ingestion tool Pub/Sub and its event systems to get the response in real time. We evaluate the output of that model in a Dataproc cluster or Dataform, and run all of that from Python notebooks, which can call out to BigQuery to train a model, or evaluate it and pass events back through the event system.
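    For the model-training piece, a hedged sketch of retraining inside BigQuery with BigQuery ML from a notebook might look like this; the dataset, table, features, and model type are illustrative placeholders rather than Branch’s actual schema.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Retrain a simple anomaly classifier directly where the data lives.
client.query("""
    CREATE OR REPLACE MODEL analytics.txn_anomaly_model
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['is_anomalous']) AS
    SELECT amount, merchant_category, hour_of_day, is_anomalous
    FROM analytics.labeled_transactions
""").result()
```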

    Full integration of data solutions

    At the next level, you need to push data out to your internal teams. We are growing and evolving, so I looked for ways to save on costs during this transition. We do a heavy amount of work in Google Sheets because it integrates well with other Google services, getting data and visuals out to the people who need them and enabling them to access raw data and refresh it as needed. 

    Google Groups also makes it easy to restrict access to data tables, which is a vital concern in the fintech space. The infrastructure management and integration of Google Groups make it super useful. If an employee departs the organization, we can easily delete or adjust their level of access. We can add new employees to a group that has a certain level of rights, or read and write access to the underlying databases. As we grow with Google Cloud, I also envision being able to track usage at the user level, including who’s running which SQL queries and who’s straining the database and raising our costs. 
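    To show the kind of group-based control described here, a minimal sketch that grants a Google Group read access to a BigQuery dataset could look like this; the project, dataset, and group are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("my-project.analytics")  # placeholder dataset

# Grant read access to everyone in the group; removing a departing employee
# from the group revokes their access in one place.
entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(
    role="READER",
    entity_type="groupByEmail",
    entity_id="data-readers@example.com",
))
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```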

    A streamlined data science team saves costs

    I’d estimate that Google Cloud’s solutions have saved us the equivalent of one full-time engineer we’d otherwise need to hire to link the various tools together, making sure that they are functional and adding more monitoring. Because of the fully managed features of many of Google Cloud’s products, that work is done for us, and we can focus on expanding our customer products. We’re now 100% Google Cloud for all production systems, having consolidated from IBM, AWS, and other cloud point solutions. 

    For example, Branch is now expanding financial wellness offerings for our customers to encourage better financial behavior through transaction monitoring, forecasting their spend and deposits, and notifying them of risks or anomalies. With those products and others, we’ll be using and benefiting from the speed, scalability, and ease of use of Google Cloud solutions, where they always keep data—and data teams—top of mind. 

    Learn more about Branch. Curious about other use cases for BigQuery? Read how retailers can use BigQuery ML to create demand forecasting models.

    Related Article

    Inventory management with BigQuery and Cloud Run

    Building a simple inventory management system with Cloud Run and BigQuery

    Read Article

    Read More