Born in Detroit, Accelerated with Google

Posted by Ajeet Mirwani, Program Manager, Developer Relations

StockX is a Detroit-based tech leader focused on the large and growing online resale marketplace for sneakers, apparel, accessories, collectibles, and electronics. Its innovative marketplace enables users to anonymously buy and sell high-demand consumer products with stock market-like visibility. StockX employs over 800 people in more than 13 offices and authentication centers around the world, and facilitates sales in more than 200 countries and territories.

StockX has been selected for Google’s Late-Stage Accelerator, which offers specialized programs in the areas of tech, design, product, and people operations to enable high-growth startups. This accelerator is built using the fundamentals of the Google for Startups Accelerator that runs across the globe.

Every single item sold on StockX is shipped to one of its six global authentication centers and verified by a human to ensure the item is brand new, authentic, and has no manufacturing defects, providing confidence that resale market transactions are safe and secure.

The partnership between StockX and Google came about as StockX started looking for technology to enhance its authentication process. Today, this process is managed by the StockX team, with “authenticators” (i.e., employees who are specially trained at finding fakes, manufacturing defects, etc.) taking on the work.

With this problem statement in mind, we gathered experts from the Google Cloud AI team to help StockX utilize machine learning / AI to improve the speed and accuracy of authentication, spotting which items are fake or have a manufacturing defect. This is a perfect problem for AI – StockX captures large amounts of information about every item and whether it passed or failed authentication, enabling the team to quickly gather training data. StockX and the Accelerator team started collaboration early in the process, planning the project phases together and bringing Google’s experience and expertise in solving these types of problems to bear. The teams meet weekly, sharing data, insights and feedback to enable fast iteration.

Google’s experts in applied machine learning (ML) from the Late-Stage Accelerator have already saved the StockX technical team significant time on model architecture and data management. Both teams are looking forward to moving this collaboration to the next stage of model development, training and serving into production. More to come!


Developer tips and guides: Common policy violations and how you can avoid them

By Andrew Ahn, Product Manager, Google Play App Safety

At Google Play, we want to foster an ecosystem of safe, engaging, useful, and entertaining apps used and loved by billions of Android users worldwide. That’s why we regularly update and revise our Google Play Developer Policies and Developer Distribution Agreement, detailing the boundaries of app content and functionalities allowed on the platform, as well as providing the latest guidance on how developers can promote and monetize apps.

In recent efforts to analyze apps for policy compliance on Google Play, we identified some common mistakes and violations that developers make. We’re sharing these with the developer community, along with tips and guidance on how to avoid them, to mitigate the risk of apps and developer accounts being suspended for violating our policies.

Links that take users back to other apps on the Play Store

One of the most common mistakes we see is apps with buttons and menus that link out to the Play Store, either to apps by the same developer or to other apps affiliated with the developer, without making it clear that these are ads or promotional links. Without this clarity, apps may face enforcement for having deceptive or disguised ads. One way to avoid such mistakes is to explicitly call these out by labeling the buttons and links as ‘More Apps’, ‘More Games’, ‘Explore’, ‘Check out our other apps’, etc.

Example of app content that links out to an app listing on Play

Spammy app descriptions

Another mistake we frequently observe is developers ‘stuffing’ keywords into the app description in the hope of better discoverability and ranking against certain keywords and phrases. Text blocks or lists that contain repetitive or unrelated keywords or references violate our Store Listing and Promotion policy. Writing a clear app description intended and optimized for users’ readability and understanding is one of the best ways to avoid this violation.

Watch this video to learn how to avoid spammy store listings and efforts to artificially boost app visibility.

Abandoned and broken apps

Some apps were published a long time ago and are no longer maintained. Abandoned and unmaintained apps often create user experience issues — broken app functionality, for example. Not only are such apps at risk of getting a low star rating and negative user reviews, they will also be flagged as violating the minimum functionality policy. To mitigate the negative impact on developer reputation and avoid enforcement, consider unpublishing such apps from the Play Store. Note that unpublishing won’t affect existing users who have already installed the app, and developers can always choose to re-publish after addressing the broken experiences.

Example of an abandoned app that provides a broken app experience


Take the ‘Minimum and Broken Functionality Spam’ course on Play Academy

Apps vs. Webview

Lastly, we observe a large volume of app submissions that are just webviews of existing websites. Most of these apps are submitted with the primary purpose of driving traffic rather than providing engaging app experiences to Android users. Such apps are considered webview spam and are removed from Play. Instead, think through what users can do, or do better, with the app than in a web experience, and implement relevant features and functionality that enrich the user experience.

Example of a webview without any app functionality


Take the ‘Webview Spam’ course on Play Academy

While the above are some of the most frequent mistakes, make sure to stay up to date with the latest policies by visiting the Play Developer Policy Center. Check out Google Play Academy’s Policy training, including our new Spam courses, and watch our Play PolicyBytes videos to learn more about recent policy updates.


Introducing the Android for Cars App Library

Posted by Eric Bahna, Product Manager

In August, we announced plans to expand Android Auto’s app ecosystem to enable new navigation, parking, and electric vehicle charging apps. We’ve been hard at work collaborating with our early access partners to test and refine the Android for Cars App Library. Today, we’re releasing the library into an open beta, for any developer to use. This means you’ll now be able to design, develop, and test your navigation, parking or charging app on Android Auto. We’re looking forward to enabling Google Play Store publishing for your beta apps in the coming months.


Three of our early access partners: ChargePoint, SpotHero, and Sygic

The design phase is the time to familiarize yourself with our design guidelines and app quality guidelines. Driver safety is core to our mission and we want to help you optimize your app for the car.

When it comes time to build your app, our new library will hopefully make development easy. Get started with the developer guide and please give us feedback via our public issue tracker.

In the testing phase, see your app come alive on the Desktop Head Unit (DHU), our emulator that lets you simulate a car infotainment display. The DHU now supports multiple screen sizes, displaying information in the instrument cluster, and simulating vehicles with touchpad input.


The DHU simulating an instrument cluster, a widescreen head unit, and a touchpad

You can get started with the Android for Cars App Library here. We’re excited to see what you build next!


Learn the steps to build an app that detects crop diseases

Posted by Laurence Moroney, TensorFlow Developer Advocate at Google

On October 16-18, thousands of developers from all over the world are coming together for DevFest 2020, the largest virtual weekend of community-led learning on Google technologies.

For DevFest this year, a few familiar faces from Google and the community came together to show you how to build an app using multiple Google Developer tools to detect crop diseases, from scratch, in just a few minutes. This is one example of how developers can leverage a number of Google tools to solve a real-world problem. Watch the full demo video here or learn more below.

Creating the Android app

Image of Chet Haase

Chet Haase, Android Developer Advocate, begins by creating an Android app that recognizes information about plants. To do that, he needs camera functionality, and also machine learning inference.

The app is written in Kotlin, uses CameraX to take the pictures, and uses ML Kit for on-device machine learning analysis. The core functionality revolves around taking a picture, analyzing it, and displaying the results.

[Code showing how the app takes a picture, analyzes it, and displays the results.]

ML Kit makes it easy to recognize the contents of an image using its ImageLabeler object, so Chet just grabs a frame from CameraX and uses that. When analysis succeeds, the app receives a collection of ImageLabels, which it turns into text strings and displays in a toast with the results.

[Demo of the app detecting that the image is a plant.]

Setting up the Machine Learning model

To dig a little deeper, Gus Martins, Google Developer Advocate for TensorFlow, shows us how to set up a Machine Learning model to detect diseases in bean plants.

Gus uses Google Colab, a cloud-hosted development tool, to do transfer learning from an existing ML model hosted on TensorFlow Hub.

He then puts it all together and uses a tool called TensorFlow Lite Model Maker to train the model using our custom dataset.
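
The notebook itself isn’t reproduced here, but a minimal sketch of that Model Maker flow could look like the following. The tflite_model_maker calls come from its documented image-classification API (exact import paths vary by library version), and the beans/ folder with its class subfolders is an illustrative stand-in for Gus’s dataset:

# Sketch only: transfer learning with TensorFlow Lite Model Maker on a
# folder of labeled leaf images (one subfolder per class).
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

data = DataLoader.from_folder('beans/')       # hypothetical local dataset
train_data, test_data = data.split(0.9)       # 90/10 train/test split

model = image_classifier.create(train_data)   # transfer learning happens here
loss, accuracy = model.evaluate(test_data)    # sanity-check the result

model.export(export_dir='.')                  # writes a .tflite file with metadata

Because the exported .tflite file carries metadata, it is ready for the Android Studio step described next.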

Setting up the Android app to recognize and build classes

The model Gus created includes all the metadata needed for Android Studio to recognize it and build classes from it that can run inference on the model using TensorFlow Lite. To do so, Annyce Davis, Google Developer Expert for Android, updates the app to use TensorFlow Lite.

Image of Annyce Davis

She uses the model with an image from the camera to get an inference about a bean leaf to see if it is diseased or not.

Now, when we run our app, instead of telling us it’s looking at a leaf, it can tell us if our bean is healthy or, if not, can give us a diagnosis.

(Demo of the app detecting whether or not the plant is healthy)

Transforming the demo into a successful app using Firebase, Design, and Responsible AI principles

This is just a raw demo. But to transform it into a successful app, Todd Kerpelman, Google Developer Advocate for Firebase, suggests using the Firebase plugin for Android Studio to add some Analytics, so we can find out exactly how our users are interacting with our app.

Image of Todd Kerpelman

There are a lot of ways to get at this data — it will start showing up in the Firebase dashboard, but one really fun way of viewing it is to use StreamView, which gives you a real-time sample of the kinds of analytics results you’re seeing.

[Firebase StreamView allows you to view real-time analytics.]

Using Firebase, you could also, for example, add A/B testing to your app to choose the best model for your users; have remote configuration to keep your app up to date; have easy sign-in to your app if you want users to log in, and a whole lot more!

Di Dang, UX Designer & Design Advocate, reminds us that if we were to productize this app, it’s important to keep in mind how our AI design decisions impact users.

Image of Di Dang

For instance, we need to consider whether and how it makes sense to display confidence intervals. Or consider how you design the onboarding experience to set user expectations for the capabilities and limitations of your ML-based app, which is vital to app adoption and engagement. For more guidance on AI design decisions, check out the People + AI Guidebook.

[You can learn more about AI design decisions in the People + AI Guidebook.]

This use case focuses on plant diseases, but for this case and others, where our ML-based predictions intersect with people or communities, we absolutely need to think about responsible AI themes like privacy and fairness. Learn more here.

Building a Progressive Web App

Paul Kinlan, Developer Advocate for Web, reminds us to not forget about the web!

Image of Paul Kinlan

Paul shows us how to build a PWA that users can install across all platforms, combining the camera with TensorFlow.js to bring machine learning to an amazing experience that runs entirely in the browser – no additional download required.

After setting up the project with a standard layout (with an HTML file, manifest, and Service Worker to make it a PWA) and a data folder that contains our TensorFlow configuration, we wait until all of the JS and CSS has loaded before initializing the app. We then set up the camera with our helper object and load the TensorFlow model. Once the model is active, we set up the UI.

The PWA is now ready and waiting for us to use.


(The PWA tells us whether or not the plant is healthy – no app download necessary!)

The importance of Open Source

And finally, Pujaa Rajan, Google Developer Expert for TensorFlow and Women Techmakers lead, reminds us that we might want to open source this project, too, so that developers can suggest improvements, optimizations, and even additional features by filing an issue or sending a pull request. It’s a great way to get your hard work in front of even more people. You can learn more about starting an Open Source project here.

Image of Pujaa Rajan

In fact, we’ve already open sourced this project, which you can find here.

So now you have the platform for building a real app — with the tooling from Android Studio, CameraX, Jetpack, ML Kit, Colab, TensorFlow, Firebase, Chrome and Google Cloud, you have a lot of things that just work better together. This isn’t a finished project by any means, just a proof of concept for how a minimum viable product with a roadmap to completion can be put together using Google’s Developer Tools.

Join us online this weekend at a DevFest near you. Sign up here.


Image archive, analysis, and report generation with Google APIs

Posted by Wesley Chun, Developer Advocate, Google Cloud

File backup isn’t the most exciting topic, while analyzing images with AI/ML is more interesting, so combining them probably isn’t a workflow you think about often. However, by augmenting the former with the latter, you can build a more useful solution than without. Google provides a diverse array of developer tools you can use to realize this ambition, and in fact, you can craft such a workflow with Google Cloud products alone. More compellingly, the basic principle of mixing-and-matching Google technologies can be applied to many other challenges faced by you, your organization, or your customers.

The sample app presented uses Google Drive and Sheets plus Cloud Storage and Vision to make it happen. The use-case: Google Workspace (formerly G Suite) users who work in industries like architecture or advertising, where multimedia files are constantly generated. Every client job results in yet another Drive subfolder and collection of asset files. Successive projects lead to even more files and folders. At some point, your Drive becomes a “hot mess,” making users increasingly inefficient, requiring them to scroll endlessly to find what they’re looking for.


A user and their Google Drive files

How can Google Cloud help? Like Drive, Cloud Storage provides file (and generic blob) storage in the cloud. (More on the differences between Drive & Cloud Storage can be found in this video.)

Cloud Storage provides several storage classes depending on how often you expect to access your archived files. The less often files are accessed, the “colder” the storage, and the lower the cost. As users progress from one project to another, they’re not as likely to need older Drive folders and those make great candidates to backup to Cloud Storage.

First challenge: determine the security model. When working with Google Cloud APIs, you generally select OAuth client IDs to access data owned by users and service accounts for data owned by applications/projects. The former is typically used with Workspace APIs while the latter is the primary way to access Google Cloud APIs. Since we’re using APIs from both product groups, we need to make a decision (for now and change later if desired).

Since the goal is a simple proof-of-concept, user auth suffices. OAuth client IDs are standard for Drive & Sheets API access, and the Vision API only needs API keys so the more-secure OAuth client ID is more than enough. The only IAM permissions to acquire are for the user running the script to get write access to the destination Cloud Storage bucket. Lastly, Workspace APIs don’t have their own product client libraries (yet), so the lower-level Google APIs “platform” client libraries serve as a “lowest common denominator” to access all four REST APIs. Those who have written Cloud Storage or Vision code using the Cloud client libraries will see something different.
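
For illustration only, here is one way those four service endpoints might be created with user OAuth and the platform client library. The scopes and the client secrets filename are assumptions for this sketch, and it uses the newer google-auth flow even though, as noted later, the post’s script sticks with the older auth libraries for backwards-compatibility:

# Sketch only: build all four REST API endpoints with one set of user credentials.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = [  # assumed scopes: read Drive, write Cloud Storage, call Vision, edit Sheets
    'https://www.googleapis.com/auth/drive.readonly',
    'https://www.googleapis.com/auth/devstorage.read_write',
    'https://www.googleapis.com/auth/cloud-vision',
    'https://www.googleapis.com/auth/spreadsheets',
]

creds = InstalledAppFlow.from_client_secrets_file(
        'client_secret.json', SCOPES).run_local_server()

DRIVE  = build('drive',   'v3', credentials=creds)
GCS    = build('storage', 'v1', credentials=creds)
VISION = build('vision',  'v1', credentials=creds)
SHEETS = build('sheets',  'v4', credentials=creds)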

The prototype is a command-line script. In real life, it would likely be an application in the cloud, executing as a Cloud Function or a Cloud Task running as determined by Cloud Scheduler. In that case, it would use a service account with Workspace domain-wide delegation to act on behalf of an employee to backup their files. See this page in the documentation describing when you’d use this type of delegation and when not to.

Our simple prototype targets individual image files, but you can continue to evolve it to support multiple files, movies, folders, and ZIP archives if desired. Each function calls a different API, creating a “service pipeline” with which to process the images. The first pair of functions are drive_get_file() and gcs_blob_upload(). The former queries for the image on Drive, grabs pertinent metadata (filename, ID, MIME type, size), downloads the binary “blob,” and returns all of that to the caller. The latter uploads the binary along with relevant metadata to Cloud Storage. The script was written in Python for brevity, but the client libraries support most popular languages. Below is the aforementioned function pseudocode:

def drive_get_file(fname):
    # Query Drive for the named image, grab pertinent metadata, then download it.
    rsp = DRIVE.files().list(q="name='%s'" % fname).execute().get('files')[0]
    fileId, fname, mtype = rsp['id'], rsp['name'], rsp['mimeType']
    blob = DRIVE.files().get_media(fileId=fileId).execute()
    return fname, mtype, rsp['modifiedTime'], blob

def gcs_blob_upload(fname, folder, bucket, blob, mimetype):
    # Upload the blob plus relevant metadata to the Cloud Storage bucket.
    # (Pseudocode: a runnable version wraps the raw bytes in a media upload
    # object and passes keyword arguments to insert().)
    body = {'name': folder+'/'+fname, 'uploadType': 'multipart',
            'contentType': mimetype}
    return GCS.objects().insert(bucket, body, blob).execute()

Next, vision_label_img() passes the binary to the Vision API and formats the results. Finally, that information, along with the file’s archived Cloud Storage location, is written as a single row of data in a Google Sheet via sheet_append_row().

def vision_label_img(img):
    # Label-detect the image via the Vision API (a runnable version must
    # base64-encode the image bytes first) and format the top results.
    body = {'requests': [{'image': {'content': img},
                          'features': [{'type': 'LABEL_DETECTION'}]}]}
    rsp = VISION.images().annotate(body=body).execute().get('responses')[0]
    return ', '.join('(%.2f%%) %s' % (label['score']*100.,
            label['description']) for label in rsp['labelAnnotations'])

def sheet_append_row(sheet_id, row):
    # Append one row of data to the Sheet and return how many cells were updated.
    rsp = SHEETS.spreadsheets().values().append(spreadsheetId=sheet_id,
            range='Sheet1', valueInputOption='USER_ENTERED',
            body={'values': [row]}).execute()
    return rsp.get('updates').get('updatedCells')

Finally, a “main” program that drives the workflow is needed. It comes with a pair of utility functions, _k_ize() to turn file sizes into kilobytes and _linkify() to build a valid Cloud Storage hyperlink as a spreadsheet formula. These are featured here:

def _k_ize(nbytes):  # bytes to KBs (not KiBs) as str
    return '%6.2fK' % (nbytes/1000.)

def _linkify(bucket, folder, fname):  # make GCS hyperlink to bucket/folder/file
    tmpl = '=HYPERLINK("https://storage.cloud.google.com/{0}/{1}/{2}", "{2}")'
    return tmpl.format(bucket, folder, fname)

def main(fname, bucket, SHEET_ID, folder):
    # Drive download -> Cloud Storage upload -> Vision labels -> Sheets row
    fname, mtype, ftime, data = drive_get_file(fname)
    gcs_blob_upload(fname, folder, bucket, data, mtype)
    info = vision_label_img(data)
    sheet_append_row(SHEET_ID, [folder, _linkify(bucket, folder, fname), mtype,
                                ftime, _k_ize(len(data)), info])
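
Putting it together, a hypothetical invocation might look like the snippet below; every argument value is a placeholder rather than something from the post or its repo:

if __name__ == '__main__':
    # Placeholder values only: an image filename on Drive, a Cloud Storage
    # bucket you own, the Drive file ID of the report Sheet, and a folder name.
    main(fname='duck.jpg', bucket='my-archive-bucket',
         SHEET_ID='YOUR_SHEET_FILE_ID', folder='vacation-2020')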

While this post may feature just pseudocode, a barebones working version can be accomplished with ~80 lines of actual Python. The rest of the code not shown consists of constants and other auxiliary support. The application gets kicked off with a call to main() passing in a filename, the Cloud Storage bucket to archive it to, a Drive file ID for the Sheet, and a “folder name,” e.g., a directory or ZIP archive. Running it on several images results in a spreadsheet that looks like this:

Image archive report in Google Sheets

Developers can build this application step-by-step with our “codelab” (a free, online, self-paced tutorial), which can be found here. As you journey through this tutorial, its corresponding open source repo features separate folders for each step so you know what state your app should be in after every implemented function. (NOTE: Files are not deleted, so your users have to decide when to cleanse their Drive folders.) For backwards-compatibility, the script is implemented using older Python auth client libraries, but the repo has an “alt” folder featuring alternative versions of the final script that use service accounts, Google Cloud client libraries, and the newer Python auth client libraries.
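
As a point of comparison with those “alt” versions, here is a rough sketch of the same label detection call written against the google-cloud-vision Cloud client library (assuming v2 or later of that library; image_bytes is an assumed variable holding the downloaded image):

# Sketch only: the equivalent Vision request via the Cloud client library,
# which uses Application Default Credentials rather than user OAuth.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
rsp = client.label_detection(image=vision.Image(content=image_bytes))
labels = ', '.join('(%.2f%%) %s' % (label.score * 100., label.description)
                   for label in rsp.label_annotations)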

Finally to save you some clicks, here are links to the API documentation pages for Google Drive, Cloud Storage, Cloud Vision, and Google Sheets. While this sample app deals with a constrained resource issue, we hope it inspires you to consider what’s possible with Google developer tools so you can build your own solutions to improve users’ lives every day!


Optimize your app publishing process with new Google Play Console features

Steve Suppe, Product Manager, Google Play

Publishing your app or game is one of the most important moments in your app’s lifecycle. You want everything to go smoothly, from making sure the production release is stable, to getting test releases out quickly, to getting your marketing message just right.

That’s why visibility is key. Knowing when your app is in review, when it’s been approved, and when it can go live on Google Play helps you set your own schedule.

Now, with two new features in the new Google Play Console, you can do just that. The Publishing overview page helps you better understand your publishing process and Managed publishing gives you better control of when your app updates go live on Google Play. When the new Play Console rolls out to everyone starting November 2, these features will be the recommended way to control your release timing, so let’s take a closer look.

Publishing overview

The new Publishing overview page displays all your recent changes to your releases, store listings, and more, including those that are currently being reviewed or processed by Google Play. For those of you with larger teams, this means you can now coordinate all your changes in one place and publish everything at the same time.

Unlike the developer activity log, the Publishing overview only shows changes that will be visible on Google Play, or what you’ve told us about how we should consider and review your app.

The “Changes in review” section lets you quickly see changes that have not been published yet.

These changes are organized by the type of change or release track so it’s easy to understand at a glance.

Managed publishing

Many of you may be familiar with Timed publishing in the old Play Console. In the new Play Console, we’ve replaced Timed publishing with Managed publishing, to give you a clearer and more predictable publishing experience.


When you enable Managed publishing, approved changes will only go live when you decide instead of automatically after review and processing. This allows you to submit changes long before your intended release date, giving yourself time to review or make changes without sacrificing control over your publishing date.

See which changes have been reviewed and approved

When Managed publishing is on, the Publishing overview page contains two sections: one that shows which changes have been approved and are ready to publish, and another that shows changes that are still in review.

We’ve also made some improvements that many of you have been asking for:

  • You can now publish your approved changes even if other changes are still in review. Previously, Timed publishing did not allow you to make any changes live until all changes had been approved.
  • You can turn Managed publishing on or off at any time, even if there are changes in review or ready to publish. You no longer have to wait for pending reviews before you can use Managed publishing.

See if Managed publishing is turned on in the left-hand navigation menu

Soon, you’ll be able to see the Managed publishing icon in the left-hand nav next to Publishing overview. This way, you can tell Managed publishing is on from anywhere in the Play Console.

To learn more about publishing with the new Play Console, including scenarios when these features would be most useful, check out this course from Play Academy. And if you haven’t already, update to the new Play Console at play.google.com/console and give Managed publishing a try.



Building for a more productive inbox with AMP

Posted by Jon Harmer, Product Manager, Google Workspace

With today being the start of AMP Fest, quite naturally, AMP is on our minds. One of the ways that AMP shines is through email. Now, ask any marketer about email and you’ll get answers ranging from “it’s completely dead” to “it’s a preferred communications channel.” The reality is that email is only as valuable as how it’s used. When used poorly, email just adds to the clutter of someone’s inbox with constant notifications and never-ending email chains. With AMP for Email, brands can change triggered emails from being just another notification to an easy way for a user to always have real-time, relevant context.

Expanding the AMP Ecosystem

We’re excited to be partnering with Verizon Media and Salesforce Marketing Cloud to build for a future in which every message and touchpoint is an opportunity to make a delightful impression with rich, web-like experiences.

“The motivation to join the AMP for email project was simple: Allowing brands to send richer and more engaging emails to our users. This in turn creates a much better user experience and reduces friction. This also enables features and functionality right within the email environment which are on par with other native web or app experiences. It’s a perfect fit with our mission … to create the best consumer email experience.” said Nirmal Thangaraj, Engineer on Verizon Media Mail, which powers AOL and Yahoo! Mail.

Making things even easier for email senders, Salesforce announced at AMP Fest that early next year, senders will be able to send AMP emails from the Marketing Cloud. With Salesforce Marketing Cloud enabling AMP emails, senders can add one to two actionable steps into their emails and store that information back in Salesforce Marketing Cloud.

AMP for Productivity

Another area where AMP can really make an impact is in the office. With the influx of applications in the workplace, companies are using new SaaS applications to simplify individual processes – but this comes with the downside of complicating a worker’s day by requiring that employee to jump from app to app to get work done. With context-aware content that’s dynamically populated and updated in real time, email can get away from being a tool that adds to employees’ frustrations and get back to being a tool where work actually happens.

Let’s take a look at a couple partners who have been building AMP emails, and how they’ve gone about implementing AMP as part of their email strategy.

Guru

Guru sends tens of thousands of notification emails each day, and while helpful, there were limitations to their effectiveness. Here’s Jason Maynard, Guru’s VP of Product, on AMP:

“Static emails are helpful for giving a user awareness of a necessary task, but they also require that user to navigate away from their inbox to our web app in order to review knowledge cards and take specific actions. Their workflow is interrupted. Thus, we decided to leverage AMP in hopes of alleviating this user friction with a goal of fostering engagement within an email thread and reducing context switching”

And the process and results were also in Guru’s favor:

“AMP’s predefined components, documented examples, and testing playgrounds were all development resources that enabled us to deploy AMP payloads very quickly

The new implementation has resulted in users now being able to interact with these notifications to a much greater extent. Users can now expand and read knowledge cards within their email thread. They can also complete actions such as card verifications and reply comments. Emails are now much more stateful and relevant to users.”

After deploying AMP, Guru saw a noticeable uptick in email-driven actions resulting in a 2.5x increase in the number of card comment actions and a 75% increase in card verification. These are thousands of new actions that helped teams manage their knowledge base, all without leaving their inbox.

VOGSY

Professional services business VOGSY sends notification emails that have multiple conversion paths. Historically, these actions would take a day to complete. With AMP, they’ve seen an 80% improvement in completion speed. Reaching this success was a smooth and pleasant journey.

“Our developers and our users love AMP technology. Developers truly enjoy building engaging emails with personalized content that is securely and dynamically updated every time you open the email. User adoption is 100%. Completing a workflow can be done without leaving your inbox. That is a huge improvement in user experience. Because of its fast adoption, we expect to send more than 2 million AMP emails in the first year.” said Leo Koster, CEO of VOGSY.

Copper

GIF of AMP used by Copper

Copper is a CRM designed for people whose business relies on relationship-building; it functions seamlessly in the background while employees spend time on what matters: customers. Email is obviously a big part of how organizations communicate, plan, and collaborate. And up to now, email has mostly been used as a gateway to other applications where users can take action or complete their task.

“This is why the idea of dynamic emails intrigued us… Supercharging the receivers’ experience to provide up to date information that you can interact with from your inbox. Instead of receiving static email notifications each time you are tagged, we leveraged AMP for email to give users a single, dynamic email where they can see relevant information about the opportunity. They can then respond to comments from their teammates—bringing our users the most seamless experience possible wherever they like to work.” said Product Manager, Sefunmi Osinaike.

And best of all, the process was simple.

“Our developers described the documentation as enjoyable because it helped us add rich components without the overhead of figuring out how to make them work in email with basic HTML. The ease of use of lists, inputs and tooltips accelerated the rate we prototyped our feature and it saved us a lot of time. We also got a ton of support on Stack Overflow with responses in less than 24 hours.”

For Copper, AMP has allowed them to take the experiences that always existed in Copper and move them closer to the employee’s day-to-day workflow by letting users take those actions from email.

Stripo

As an email design platform, Stripo.email has seen over 1,000 different companies create AMP email campaigns with Carousels, Feedback Forms, and Net Promoter Score forms – in one month alone. Stripo was able to implement AMP to collect information in emails, so that users could fill out forms without having to leave their inbox. The strategy drove a 5x lift in effectiveness over traditional questionnaires.

We’re excited about AMP and all of the great use cases partners are implementing to modernize the capabilities of email. To learn more about AMP for email, click here and be sure to check out AMP Fest.


Android Studio 4.1

Posted by Scott Swarthout, Product Manager


Today, we’re excited to release the stable version of Android Studio 4.1, with a set of features addressing common editing, debugging, and optimization use cases. A major theme for this release was helping you be more productive while using Android Jetpack libraries, Android’s suite of libraries to help developers follow best practices and write code faster. Based on your feedback we made a number of improvements to the code editing experience with IDE integrations for popular Android libraries.

Some highlights of Android Studio 4.1 include a new Database Inspector for querying your app’s database, support for navigating projects that use Dagger or Hilt for dependency injection, and better support for on-device machine learning with support for TensorFlow Lite models in Android projects. We’ve also made updates to Apply Changes to make deployment faster. Based on your feedback, we’ve made several changes to help game developers with a new native memory profiler and standalone profiling tools.

Product quality continues to be a major focus for the team, and we’ve been hard at work tracking down bugs and performance issues. We’ve heard from many developers that they liked the focus on better performance and reliability, so we’re happy to report that during this release cycle we’ve fixed 2,370 bugs and closed 275 public issues. We stay committed to maintaining high quality since we know that is key to your developer productivity.

Thank you to those who gave your early feedback in preview releases. Your feedback helped us iterate and improve features in Android Studio 4.1. If you are ready for the next stable release, and want to use a new set of productivity features, Android Studio 4.1 is ready to download for you to get started.

Below is a full list of new features in Android Studio 4.1, organized by key developer flows.

Design

Material Design Components updates

Android Studio templates in the Create New Project dialog now use Material Design Components (MDC) and conform to updated guidance for themes and styles by default. These changes will make it easier to use recommended material styling patterns and support modern UI features like dark themes.


Material Design Components updates in Project Templates

Updates include:

  • MDC: Projects depend on com.google.android.material:material in build.gradle. Base app themes use Theme.MaterialComponents.* parents and override updated MDC color and “on” attributes.
  • Color resources: Color resources in colors.xml use literal names (for example, purple_500 instead of colorPrimary).
  • Theme resources: Theme resources are in themes.xml (instead of styles.xml) and use Theme.<ApplicationName> names.
  • Dark theme: Base application themes use DayNight parents and are split between res/values and res/values-night.
  • Theme attributes: Color resources are referenced as theme attributes (for example, ?attr/colorPrimary) in layouts and styles to avoid hard-coded colors.

Develop

Database Inspector

We wanted to make it easier to inspect, query, and modify your app’s databases using the new Database Inspector. To get started, deploy your app to a device running API level 26 or higher and select View > Tool Windows > Database Inspector from the menu bar. Whether your app uses the Jetpack Room library or the Android platform version of SQLite directly, you can now easily inspect databases and tables in your running app or run custom queries.

Because Android Studio maintains a live connection while you’re inspecting your app, you can also modify values using the Database Inspector and see those changes in your running app. If you use the Room persistence library, Android Studio also places run buttons next to each query in the code editor to help you quickly run queries you define in your @Query annotations. Learn more


Inspect, query, and modify your app’s databases with the Database Inspector

Run Android Emulator directly in Android Studio

You can now run the Android Emulator directly in Android Studio. Use this feature to conserve screen real estate, to navigate quickly between the emulator and the editor window using hotkeys, and to organize your IDE and emulator workflow in a single application window. You can manage snapshots and common emulator actions like rotating and taking screenshots from within Studio, but access to the full set of options still requires running the stable emulator. You can opt in to this feature by going to File → Settings → Tools → Emulator → Launch in Tool Window.


Run the Android Emulator inside of Android Studio

Dagger Navigation Support

Dagger is a popular library for dependency injection on Android. Android Studio makes it easier to navigate between your Dagger-related code by providing new gutter actions and extending support in the Find Usages window. For example, clicking on the “go to producer” gutter action next to a method that consumes a given type navigates you to the provider of that type. Conversely, clicking on the “go to consumer” gutter action navigates you to where a type is used as a dependency. Android Studio also supports navigation actions for dependencies defined with the Jetpack Hilt library. Learn more.


Navigate between Dagger-related code with gutter actions

Use TensorFlow Lite models

Android developers are using machine learning to create innovative and helpful experiences. TensorFlow Lite is a popular library for writing mobile machine learning models, and we wanted to make it easier to import these models into Android apps. Similar to view binding, Android Studio generates easy-to-use classes so you can run your model with less code and better type safety. The current implementation of ML Model Binding supports image classification and style transfer models, provided they are enhanced with metadata.

To see the details for an imported model and get instructions on how to use it in your app, double-click the .tflite model file in your project to open the model viewer page. Learn more.


View TensorFlow Lite model metadata in Android Studio 4.1

Build & Test

Android Emulator – Foldable Hinge Support



In addition to recently adding 5G cellular testing support, we’ve added support for foldables in the Android emulator. With Android emulator 30.0.26 and above, you can configure foldable devices with a variety of fold designs and configurations. When a foldable device is configured, the emulator will publish hinge angle sensor updates and posture changes, so you can test how your app responds to these form factors. See the Developing for Android 11 with the Android Emulator blog post to read more.

Extended controls, device pose

Apply Changes updates

Faster builds help developers make changes to their app more easily and quickly. To help you be more productive as you iterate on your app, we’ve made multiple enhancements to Apply Changes for devices running Android 11 or higher.

We’ve invested heavily in optimizing your iteration speed by developing a method to deploy and persist changes on a device without installing the application. After an initial deploy, subsequent deploys to Android 11 devices using either Apply Code Changes or Apply Changes and Restart Activity are now significantly faster. We’ve also added support for additional code changes in Apply Changes. Now if you add a method, you can deploy those changes to a running app by clicking either Apply Code Changes or Apply Changes and Restart Activity.

Export C/C++ dependencies from AARs

Android Gradle Plugin 4.0 added the ability to import Prefab packages in AAR dependencies. We wanted to extend the capability of this feature to support sharing native libraries as well. AGP version 4.1 enables exporting libraries from your external native build in an AAR for an Android Library project. To export your native libraries, add the following to the android block of your library project’s build.gradle file:

buildFeatures {
    prefabPublishing true
}

prefab {
    mylibrary {
        headers "src/main/cpp/mylibrary/include"
    }

    myotherlibrary {
        headers "src/main/cpp/myotherlibrary/include"
    }
}

Symbolication for native crash reports

When a crash or ANR occurs in native code, the system produces a stack trace, which is a snapshot of the sequence of nested functions called in your program up to the moment it crashed. These snapshots can help you to identify and fix any problems in the source, but they must first be symbolicated to translate the machine addresses back into human-readable function names.

If your app or game is developed using native code, like C++, you can now upload debug symbols files to the Play Console for each version of your app. The Play Console uses these debug symbols files to symbolicate your app’s stack traces, making it easier to analyze crashes and ANRs. To include debug symbols in your app bundle, add the following line to your project’s build.gradle file:

android.buildTypes.release.ndk.debugSymbolLevel = 'SYMBOL_TABLE'

Optimize

System Trace UI improvements

In Android Studio 4.1 we’ve overhauled System Trace, an optimization tool that gives you a real-time look at how your app is using system resources. We made it easier to select a trace with box selection mode, added a new analysis tab, and added more frame rendering data to help you investigate rendering issues in your app’s UI. Learn more.

Box selection: In the Threads section, you can now drag your mouse to perform a box selection of a rectangular area, which you can zoom into by clicking the Zoom to Selection button on the top right (or use the M keyboard shortcut). When you drag and drop similar threads next to each other, you can select across multiple threads to inspect all of them at once.

Use box selection to more easily select traces.


Summary tab: The new Summary tab in the Analysis panel displays:

  • Aggregate statistics for all occurrences of a specific event, such as an occurrence count and min/max duration.
  • Trace event statistics for the selected occurrence.
  • Data about thread state distribution.
  • Longest-running occurrences of the selected trace event.

View aggregated statistics in the Summary tab

Display data: In the Display section, new timelines for SurfaceFlinger and VSYNC help you investigate rendering issues in your app’s UI.

Standalone profilers

It’s now possible to access the Android Studio Profilers in a separate window from the primary Android Studio window. This is useful when optimizing Android games built with other tools like Unity or Visual Studio.

To run the standalone profilers, do the following:

  1. Make sure the profilers in Android Studio are not already running on your system.
  2. Go to the installation directory and navigate to the bin directory:

Windows/Linux: <studio-installation-folder>/bin

macOS: <studio-installation-folder>/Contents/bin

  3. Depending on your OS, run profiler.exe or profiler.sh.

The standalone profiler will allow you to connect to the Android emulator or any connected devices.


Optimize your app with the Standalone Android Studio Profilers

Native Memory Profiler

Tracking native memory usage is important for game developers and other developers using C++ to understand how to optimize their app’s memory consumption. The Android Studio Memory Profiler now includes a Native Memory Profiler for apps deployed to physical devices running Android 10 or later. The Native Memory Profiler tracks allocations/deallocations of objects in native code for a specific time period and provides information about total allocations and remaining system heap size.

To initiate a recording, click Record native allocations at the top of the Memory Profiler window:


View native memory allocations with the Native Memory Profiler

To recap, Android Studio 4.1 includes these new enhancements & features:

Design

  • Material Design Components updates

Develop

  • Database Inspector
  • Run Android Emulator directly in Android Studio
  • Dagger navigation support
  • Use TensorFlow Lite models

Build & Test

  • Android Emulator – Foldable Hinge Support
  • Apply Changes updates
  • Export C/C++ dependencies from AARs
  • Symbolication for native crash reports

Optimize

  • System Trace UI Improvements
  • Standalone profilers
  • Native Memory Profiler

These materials are not sponsored by or affiliated with Unity Technologies or its affiliates. “Unity” is a trademark or registered trademark of Unity Technologies or its affiliates in the U.S. and elsewhere.


Top brands integrate Google Assistant with new tools and features for Android apps and Smart Displays

Posted by Baris Gultekin and Payam Shodjai, Directors of Product Management

Top brands turn to Google Assistant every day to help their users get things done on their phones and on Smart Displays — such as playing games, finding recipes or checking investments, just by using their voice. In fact, over the last year, the number of Actions completed by third-party developers has more than doubled.

We want to support our developer ecosystem as they continue building the best experiences for smart displays and Android phones. That’s why today at Google Assistant Developer Day, we introduced:

  • New App Actions built-in intents — to enable Android developers to easily integrate Google Assistant with their apps,
  • New discovery features such as suggestions and shortcuts — to help users easily discover and engage with Android apps,
  • New developer tools and features, such as a testing API, voices, and frameworks for game development — to help build high-quality native experiences for smart displays,
  • New discovery and monetization improvements — to help users discover and engage with developers’ experiences on Assistant.

Now, all Android Developers can bring Google Assistant to their apps

Now, every Android app developer can make it easier for their users to find what they’re looking for by fast-forwarding them into the app’s key functionality using just their voice. With App Actions, top app developers such as Yahoo Mail, Fandango, and ColorNote are creating these natural and engaging experiences by mapping users’ intents to specific functionality within their apps. Instead of having to navigate through each app to get tasks done, users can simply say “Hey Google” along with the outcome they want, such as “find Motivation Mix on Spotify.”

Here are a few updates we’re introducing today to App Actions.

Quickly open and search within apps with common intents

Every day, people ask Google Assistant to open their favorite apps. Today, we are building on this functionality to open specific pages within apps and also search within apps. Starting today, you can use the GET_THING intent to search within apps and the OPEN_APP_FEATURE intent to open specific pages in apps; offering more ways to easily connect users to your app through Assistant.

Many top brands such as eBay and Kroger are already using these intents. If you have the eBay app on your Android phone, try saying “Hey Google, find baseball cards on eBay” to try the GET_THING intent.

If you have the Kroger app on your Android phone, try saying “Hey Google, open Kroger pay” to try the OPEN_APP_FEATURE intent.

It’s easy to add all these common intents to your Android apps. You can simply declare support for these capabilities in your actions.xml file to get started. For searching, you can provide a deep link that will allow Assistant to pass a search term into your app. For opening pages, you can provide a deep link with the corresponding name for Assistant to match users’ requests.

Vertical specific built-in intents

For a deeper integration, we offer vertical-specific built-in intents (BIIs) that let Google take care of all the Natural Language Understanding (NLU) so you don’t have to. We first piloted App Actions in some of the most popular app verticals such as Finance, Ridesharing, Food Ordering, and Fitness. Today, we are announcing that we have now grown our catalog to cover more than 60 intents across 10 verticals, adding new categories like Social, Games, Travel & Local, Productivity, Shopping, and Communications.

For example, Twitter and Wayfair have already implemented these vertical built-in intents. So, if you have the Twitter app on your Android phone, try saying “Hey Google, post a Tweet” to see a Social vertical BII in action.

If you have the Wayfair app on your Android phone, try saying “Hey Google, buy accent chairs on Wayfair” to see a Shopping vertical BII in action.

Check out how you can get started with these built-in intents or explore creating custom intents today.

Custom Intents to highlight unique app experiences

Every app is unique with its own features and capabilities, which may not match the list of available App Actions built-in intents. For cases where there isn’t a built-in intent for your app functionality, you can instead create a custom intent. Like BIIs, custom intents follow the actions.xml schema and act as connection points between Assistant and your defined fulfillments.

Snapchat and Walmart use custom intents to extend their app’s functionality to Google Assistant. For example, if you have the Snapchat app on your Android phone, just say, “Hey Google, send a Snap using the cartoon face lens” to try their Custom Intent.

Or, If you have the Walmart app on your Android phone, just say, “Hey Google, reserve a time slot with Walmart” to schedule your next grocery pickup.

With more common, built-in, and custom intents available, every Android developer can now enable their app to fulfill Assistant queries tailored to exactly what their app offers. Developers can also use familiar developer tools such as Android Studio, and with just a few days of work, they can easily integrate their Android apps with Google Assistant.

Suggestions and Shortcuts for improving user discoverability

We are excited about these new improvements to App Actions, but we also understand that it’s equally important that people are able to discover your App Actions. We’re designing new touch points to help users easily learn about Android apps that support App Actions. For example, we’ll show suggestions that recommend relevant App Actions even when the user doesn’t mention the app name explicitly. If you say broadly, “Hey Google, show me Taylor Swift,” we’ll highlight a suggestion chip that guides you to open the search result in Twitter. Google Assistant will also suggest apps proactively, depending on individual app usage patterns.

Android users will also be able to customize their experience with app shortcuts, creating their own way to automate their most common tasks by setting up quick phrases for the app functions they frequently use. For example, you can create a MyFitnessPal shortcut to easily track your calories throughout the day and customize the query to say what you want – such as “Hey Google, check my calories.”

By simply saying “Hey Google, shortcuts”, they can set up and explore suggested shortcuts in the settings screen. We’ll also make proactive suggestions for shortcuts throughout the Assistant mobile experience, tailored to how you use your phone.

Build high quality conversational Actions for Smart Displays

Back in June, we launched new developer tools such as Actions Builder and Actions SDK, making it easier to design and build conversational Actions on Assistant, like games, for Smart Displays. Many partners have already been building with these, such as Cool Games and Sony. We’re excited to share new updates that not only enable developers to build more, higher quality native Assistant experiences with new game development frameworks and better testing tools, but we’ve also made user discovery of those experiences better than ever.

New developer tools and features

Improved voices

We’ve heard your feedback that you need better voices to match the quality of the experiences you’re delivering on the Assistant. We’ve released two new English voices that take advantage of an improved prosody model to make Assistant sound more natural. Give it a listen.

These voices are now available and you can leverage them in your existing Actions by simply making the change in the Actions Console.

Interactive Canvas expansion

But what can you build with these new voices? Last year, we introduced Interactive Canvas, an API that lets you build custom experiences for the Assistant that can be controlled via both touch and voice using simple technologies like HTML, CSS, and JavaScript.

We’re expanding Interactive Canvas to Actions in the education and storytelling verticals; in addition to games. Whether you’re building an action that teaches someone to cook, explains the phases of the moon, helps a family member with grammar, or takes you through an interactive adventure, you’ll have access to the full visual power of Interactive Canvas.

Improved testing to deliver high quality experiences

The Actions Testing API is a new programmatic way to test your critical user journeys and ensure there aren’t any broken conversation paths. Using this framework allows you to run end-to-end tests in an isolated preview environment, run regression tests, and add continuous testing to your arsenal. This API is being released to general availability soon.

New Dialogflow migration tool

For those of you who built experiences using Dialogflow, we want you to enjoy the benefits of the new platform without having to build from scratch. That’s why we’re offering a migration tool inside the Actions Console that automates much of the work to move projects to the improved platform.

New site for game developers

Game developers, we built a new resource hub just for you. Boost your game design expertise with full source code to games, design best practices, interviews with game developers, tools, and everything you need to create voice-enabled games for Smart Displays.

Discovery

With more incredible experiences being built, we know it can be challenging to help users discover them and drive engagement. To make it easier for people to discover and engage with your experiences, we have invested in a slew of new discovery features:

New Built-in intents and the Learning Hub

We’ll soon be opening two new sets of built-in intents (BIIs) for public registration: Education and Storytelling. Registering your Actions for these intents allows users to discover them in a simple, natural way through general requests to Google Assistant. These new BIIs cover a range of intents in the Education and Storytelling domains and join Games as principal areas of investment for the developer ecosystem.

People will then be able to say “Hey Google, teach me something new” and they will be presented with a Learning Hub where they can browse different education experiences. For stories, users can simply say “Hey Google, tell me a story”. Developers can soon register for both new BIIs to get their experiences listed in these browsable catalogs.

Household Authentication token and improving transactions

One of the exciting things about the Smart Display is that it’s an inherently communal device. So if you’re offering an experience that is meant to be enjoyed collaboratively, you need a way to share state between household members and between multiple devices. Let’s say you’re working on a puzzle and your roommate wants to help with a few pieces on the Smart Display. We’re introducing household authentication tokens so all users in a home can now share these types of experiences. This feature will be available soon via the Actions console.
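While the post doesn’t spell out the exact API, a reasonable sketch is household-level storage that every member of the home reads and writes from the webhook. The snippet below assumes that state is exposed as conv.home.params in the @assistant/conversation library; treat the property name and the puzzle example as illustrative.

```ts
// Sketch of household-shared state for a collaborative puzzle. Assumes the
// household token surfaces as home storage (conv.home.params); the property
// name and puzzle shape are illustrative, not a confirmed API.
import { conversation } from '@assistant/conversation';

const app = conversation();

app.handle('place_piece', (conv) => {
  // Read the state shared across the whole household, whichever member
  // or device is speaking.
  const puzzle = (conv.home.params.puzzle as { placed: number } | undefined) ?? { placed: 0 };
  puzzle.placed += 1;

  // Write it back so the next person picks up where this turn left off.
  conv.home.params.puzzle = puzzle;
  conv.add(`Nice! ${puzzle.placed} pieces placed so far.`);
});

export { app };
```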

Finally, we’re making improvements to the transaction flow on Smart Displays. We want to make it easier for you to add seamless voice-based and display-based monetization capabilities to your experience. We’ve started by supporting voice-match as an option for payment authorization. And early next year, we’ll also launch an on-display CVC entry.

Simplifying account linking and authentication

Once you build personalized and premium experiences, you need to make it as easy as possible for people to connect their existing accounts. To help streamline this process, we’re opening two betas, Link with Google and App Flip, which improve account linking flows and allow simple, streamlined authentication via your apps.

Link with Google lets anyone who is already logged in to your Android or iOS app complete the linking flow with just a few clicks, without needing to re-enter credentials.

App Flip helps you build a better mobile account linking experience and decrease drop-off rates by letting your users seamlessly link their accounts to Google without having to re-enter their credentials.
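Both flows still terminate in a standard OAuth 2.0 token exchange with your service; Link with Google and App Flip change how users reach that flow, not the exchange itself. Below is a minimal sketch of such a token endpoint in Express; the route path and helper functions are placeholders for your own auth backend.

```ts
// Minimal sketch of the OAuth 2.0 token endpoint behind Assistant account
// linking. The helpers below are placeholders standing in for a real client
// registry and token store.
import express from 'express';

const isValidClient = (id?: string, secret?: string) => Boolean(id && secret);
const redeemAuthCode = (code?: string) => ({ id: code ?? 'user' });
const redeemRefreshToken = (token?: string) => ({ id: token ?? 'user' });
const issueAccessToken = (user: { id: string }) => `access-${user.id}`;
const issueRefreshToken = (user: { id: string }) => `refresh-${user.id}`;

const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/oauth/token', (req, res) => {
  const { grant_type, client_id, client_secret, code, refresh_token } = req.body;

  if (!isValidClient(client_id, client_secret)) {
    return res.status(401).json({ error: 'invalid_client' });
  }

  if (grant_type === 'authorization_code') {
    // Exchange the one-time authorization code for tokens.
    const user = redeemAuthCode(code);
    return res.json({
      token_type: 'Bearer',
      access_token: issueAccessToken(user),
      refresh_token: issueRefreshToken(user),
      expires_in: 3600,
    });
  }

  if (grant_type === 'refresh_token') {
    // Mint a fresh access token when the old one expires.
    const user = redeemRefreshToken(refresh_token);
    return res.json({
      token_type: 'Bearer',
      access_token: issueAccessToken(user),
      expires_in: 3600,
    });
  }

  return res.status(400).json({ error: 'unsupported_grant_type' });
});

app.listen(8080);
```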

Assistant links

In addition to launching new channels of discovery for developer Actions, we also want to give you more control over how you and your users reach your Actions. Action links are a way to deep link into your conversational Action, and they have been used with great success by partners like Sushiro, Caixa, and Giallo Zafferano.…

Fernanda Viégas puts people at the heart of AI

When Fernanda Viégas was in college, it took three years with three different majors before she decided she wanted to study graphic design and art history. And even then, she couldn’t have imagined the job she has today: building artificial intelligence and machine learning with fairness and transparency in mind to help people in their daily lives.  

Today Fernanda, who grew up in Rio de Janeiro, Brazil, is a senior researcher at Google. She’s based in London, where she co-leads the global People + AI Research (PAIR) Initiative, which she co-founded with fellow senior research scientist Martin M. Wattenberg and Senior UX Researcher Jess Holbrook, and the Big Picture team. She and her colleagues make sure people at Google think about fairness and values, and about putting Google’s AI Principles into practice, when they work on artificial intelligence. Her team recently launched a series of “AI Explorables,” a collection of interactive articles to better explain machine learning to everyone.

When she’s not looking into the big questions around emerging technology, she’s also an artist, known for her artistic collaborations with Wattenberg. Their data visualization art is a part of the permanent collection of the Museum of Modern Art in New York.  

I recently sat down with Fernanda via Google Meet to talk about her role and the importance of putting people first when it comes to AI. 

How would you explain your job to someone who isn’t in tech?

As a research scientist, I try to make sure that machine learning (ML) systems can be better understood by people, to help people have the right level of trust in these systems. One of the main ways in which our work makes its way to the public is through the People + AI Guidebook, a set of principles and guidelines for user experience (UX) designers, product managers and engineering teams to create products that are easier to understand from a user’s perspective.

What is a key challenge that you’re focusing on in your research? 

My team builds data visualization tools that help people building AI systems to consider issues like fairness proactively, so that their products can work better for more people. Here’s a generic example: Let’s imagine it’s time for your coffee break and you use an app that uses machine learning for recommendations of coffee places near you at that moment. Your coffee app provides 10 recommendations for cafes in your area, and they’re all well-rated. From an accuracy perspective, the app performed its job: It offered information on a certain number of cafes near you. But it didn’t account for unintended unfair bias. For example: Did you get recommendations only for large businesses? Did the recommendations include only chain coffee shops? Or did they also include small, locally owned shops? How about places with international styles of coffee that might be nearby? 

The tools our team makes help ensure that the recommendations people get aren’t unfairly biased. By making these biases easy to spot with engaging visualizations of the data, we can help identify what might be improved. 

What inspired you to join Google? 

It’s so interesting to consider this because my story comes out of repeated failures, actually! When I was a student in Brazil, where I was born and grew up, I failed repeatedly in figuring out what I wanted to do. After I spent three years studying different things (chemical engineering, linguistics, education), someone said to me, “You should try to get a scholarship to go to the U.S.” I asked them why I should leave my country to study somewhere when I wasn’t even sure of my major. “That’s the thing,” they said. “In the U.S. you can be undecided and change majors.” I loved it!

So I went to the U.S. and by the time I was graduating, I decided I loved design but I didn’t want to be a traditional graphic designer for the rest of my life. That’s when I heard about the Media Lab at MIT and ended up doing a master’s degree and PhD in data visualization there. That’s what led me to IBM, where I met Martin M. Wattenberg. Martin has been my working partner for 15 years now; we created a startup after IBM and then Google hired us. In joining, I knew it was our chance to work on products that have the possibility of affecting the world and regular people at scale. 

Two years ago, we shared our seven AI Principles to guide our work. How do you apply them to your everyday research?

One recent example is from our work with the Google Flights team. They offered users alerts about the “right time to buy tickets,” but users were asking themselves, “Hmm, how do I trust this alert?” So the designers used our PAIR Guidebook to underscore the importance of AI explainability in their discussions with the engineering team. Together, they redesigned the feature to show users how the price for a flight has changed over the past few months and notify them when prices may go up or won’t get any lower. When it launched, people saw our price history graph and responded very well to it. By using our PAIR Guidebook, the team learned that how you explain your technology can significantly shape the user’s trust in your system.

Historically, ML has been evaluated along the lines of mathematical metrics for accuracy—but that’s not enough. Once systems touch real lives, there’s so much more you have to think about, such as fairness, transparency, bias and explainability—making sure people understand why an algorithm does what it does. These are the challenges that inspire me to stay at Google after more than 10 years. 

What’s been one of the most rewarding moments of your career?

Whenever we talk to students and there are women and minorities who are excited about working in tech, that’s incredibly inspiring to me. I want them to know they belong in tech, they have a place here. 

Also, working with my team on a Google Doodle about the composer Johann Sebastian Bach last year was so rewarding. It was the very first time Google used AI for a Doodle and it was thrilling to tell my family in Brazil, look, there’s an AI Doodle that uses our tech! 

How should aspiring AI thinkers and future technologists prepare for a career in this field? 

Try to be deep in your field of interest. If it’s AI, there are so many different aspects to this technology, so try to make sure you learn about them. AI isn’t just about technology. It’s always useful to be looking at the applications of the technology, how it impacts real people in real situations.

Read More