Join us for #30DaysOfFlutter

Posted by Nikita Gandhi (Community Manager, GDG India), Nilay Yener (Program Manager, Flutter DevRel)

Happy New Year, folks! It’s the perfect time of year to learn something new. Do you have an app idea you’ve been dreaming of over the holidays? If so, we have just the opportunity for you! Starting February 1st, leading up to our big event on March 3rd, join us for #30DaysOfFlutter to kickstart your learning journey and meet Flutter experts in the community. Whether you are building your first Flutter app or looking to improve your Flutter skills, we have curated content, codelabs, and demos!

Flutter is Google’s open source UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase. It’s one of the fastest growing, most in-demand cross platform frameworks to learn and is used by freelance developers and large organizations around the world. Flutter uses the Dart language, so it will feel natural to many of you familiar with object-oriented languages.

Jump in, the water’s fine!

Along with the curated content, we will also have four live AskMeAnything sessions (#AMAs), where you can meet members of Google’s Flutter team and community. You can also join us on the FlutterDev Discord channel, where you can meet the other members of the community, ask and answer questions, and maybe make some new Flutter friends too!

Does this sound exciting? Visit the 30 Days of Flutter website to get more information and to register to join.

#30DaysOfFlutter Schedule

Your learning journey with Flutter for the month will look like this:

Week 1

Receive curated content in your inbox. Meet other Flutter devs on Discord. Attend the kickoff webinar on February 1st.

Week 2

Receive more content. Start building your first Flutter app. Join the webinar and ask your questions.

Week 3

Work on your app and attend the 3rd webinar to ask your questions.

Week 4

Complete your project and learn how to share it with the Flutter community.

Are you ready to learn one of the most in-demand developer skills in the world?

Sign up to be a part of the journey, and be sure to follow @FlutterDev on Twitter to get updates about #30DaysOfFlutter.


MAD Skills Kotlin and Jetpack: wrap-up

Posted by Florina Muntenescu, Developer Relations Engineer


We just wrapped up another series of MAD Skills videos and articles – this time on Kotlin and Jetpack. We covered the different ways Kotlin makes Android code more expressive, concise, and safe, and how it makes running asynchronous code easier.

Check out the episodes below to level up your Kotlin and Jetpack knowledge! Each episode covers a specific set of APIs, discussing both how to use the APIs and how they work under the hood. All the episodes have accompanying blog posts, and most link to either a sample or a codelab to make it easier to follow and dig deeper into the content. We also had a live Q&A featuring Jetpack and Kotlin engineers.

Episode 1 – Using KTX libraries

In this episode we looked at how you can make your Android and Jetpack coding easy, pleasant, and Kotlin-idiomatic with Jetpack KTX extensions. Currently, more than 20 libraries have a KTX version. This episode covers some of the most important ones: core-ktx, which provides idiomatic Kotlin functionality for APIs coming from the Android platform, plus a few Jetpack KTX libraries that give us a better developer experience when working with APIs like LiveData and ViewModel.
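Under the hood, KTX libraries are ordinary Kotlin extension functions layered over existing platform types. As a rough sketch of the pattern – the `Prefs` class and `edit` helper below are hypothetical stand-ins, not real KTX APIs – a core-ktx-style helper takes a lambda with receiver, so callers write a concise block instead of boilerplate:

```kotlin
// Hypothetical class standing in for an Android platform type.
class Prefs {
    val values = mutableMapOf<String, String>()
    fun putString(key: String, value: String) { values[key] = value }
}

// A KTX-style extension: accept a lambda with receiver and apply it,
// hiding the begin/commit boilerplate a platform API would require.
inline fun Prefs.edit(action: Prefs.() -> Unit) {
    action()
    // on a real SharedPreferences.Editor, commit/apply would happen here
}

fun main() {
    val prefs = Prefs()
    prefs.edit {
        putString("user", "ada")
    }
    println(prefs.values["user"]) // prints "ada"
}
```

Real helpers such as SharedPreferences.edit { } follow this same shape, additionally committing the transaction when the lambda returns.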

Check out the video or the article:

Episode 2 – Simplifying APIs with coroutines and Flow

Episode 2 covers how to simplify APIs using coroutines and Flow, as well as how to build your own adapter using the suspendCancellableCoroutine and callbackFlow APIs. To get hands-on with this topic, check out the Building a Kotlin extensions library codelab.
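The adapter idea from this episode can be sketched with the Kotlin standard library alone. The `fetchGreeting` callback API below is hypothetical, and the stdlib `suspendCoroutine` stands in for `suspendCancellableCoroutine` (the kotlinx.coroutines version adds cancellation support on top of the same resume-from-callback pattern):

```kotlin
import kotlin.coroutines.*

// Hypothetical callback-based API of the kind the episode wraps.
fun fetchGreeting(callback: (Result<String>) -> Unit) {
    callback(Result.success("hello"))
}

// Coroutine adapter: suspend until the callback fires, then resume the
// continuation with the value (or the failure).
suspend fun fetchGreetingSuspend(): String =
    suspendCoroutine { continuation ->
        fetchGreeting { result ->
            result.fold(
                onSuccess = { continuation.resume(it) },
                onFailure = { continuation.resumeWithException(it) }
            )
        }
    }

fun main() {
    // startCoroutine (also stdlib) runs the suspend function to completion
    // without needing a kotlinx.coroutines builder like runBlocking.
    ::fetchGreetingSuspend.startCoroutine(
        Continuation(EmptyCoroutineContext) { result ->
            println(result.getOrThrow())
        }
    )
}
```

Callers then just write `fetchGreetingSuspend()` inside a coroutine, with no callback nesting.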

Watch the video or read the article:

Episode 3 – Using and testing Room Kotlin APIs

This episode opens the door to Room, peeking in to see how to create Room tables and databases in Kotlin and how to implement one-shot suspend operations like insert, and observable queries using Flow. When using coroutines and Flow, Room moves all the database operations onto the background thread for you. Check out the video or blog post to find out how to implement and test Room queries. For more hands-on work, check out the Room with a View codelab.

Episode 4 – Using WorkManager Kotlin APIs

Episode 4 makes your job easier with WorkManager, for scheduling asynchronous tasks for immediate or deferred execution that are expected to run even if the app is closed or the device restarts. In this episode we go over the basics of WorkManager and look a bit more in depth at the Kotlin APIs, like CoroutineWorker.

Find the video here and the article here, but nothing compares to practical experience so go through the WorkManager codelab.

Episode 5 – Community tip

Episode 5 is by Magda Miu – a Google Developer Expert on Android who shared her experience of leveraging foundational Kotlin APIs with CameraX. Check it out here:

Episode 6 – Live Q&A

In the final episode we launched into a live Q&A, hosted by Chet Haase, with guests Yigit Boyar – Architecture Components tech lead, David Winer – Kotlin product manager, and developer relations engineers Manuel Vivo and myself. We answered questions from you on YouTube, Twitter and elsewhere.


Community leaders upskill themselves and find new roles with Elevate by Google Developers

Posted by Kübra Zengin, GDG North America Regional Lead

Image of participants in a recent Elevate workshop.

The North America Developer Ecosystem team recently hosted Elevate for Google Developer Groups organizers and Women Techmakers Ambassadors in the US and Canada. The three-month professional development program met every Wednesday via Google Meet to help tech professionals upskill themselves with workshops on leadership, communication, thinking, and teamwork.

The first cohort of the seminar-style program recently came to a close, with 40+ Google Developer Groups organizers and Women Techmakers Ambassadors participating. Additionally, 18 guest speakers – 89% of whom were underrepresented genders – hosted specialized learning sessions over three months of events.

Elevate is just one example of the specialized applied skills training available to the Google Developer Groups community. As we look ahead to offering Elevate again in 2021, we wanted to share with you some of the key takeaways from the first installment of the program.

What the graduates had to say

From landing new roles at companies like Twitter and Accenture, to negotiating salary raises, the 40 graduates of Elevate have seen many successes. Here’s what a few of them had to say:

“I got a role at Accenture as a software engineer because I used the learnings from Elevate when applying and interviewing for the job. I can’t thank the Google team enough!”

“The interactive workshops truly helped me land my new job at Twitter.”

“After the Elevate trainings on negotiation, I successfully secured a higher salary with my new employer.”

Whether it’s finding new jobs or moving to new countries, Elevate’s graduates have used their new skills to guide their careers towards their passions. Check out a few of the program’s key lessons below:

Bringing your best self to the table

One major focus of the program was to help community leaders develop their own professional identity and confidence by learning communication techniques that would help them stand out and define themselves in the workplace.

Entire learning sessions were dedicated to specific value-adding topics, including:

  • How to use persuasive body language;
  • Finding a networking, presenting, and storytelling voice;
  • The best practices for salary negotiation.

Along with other sessions on growth mindsets, problem solving, and more, attendees gained a deeper understanding of the best ways to present themselves, their ideas, and their worth in a professional setting – an essential ability that many feel has already helped them navigate job markets with more precision.

A team that feels valued brings value

“Who is on a team matters less than how the team members interact, structure their work, and view their contributions.”

The advice above, offered by a guest speaker during a teambuilding session, was one of the quotes that resonated with participants the most during the program. The emphasis on how coworkers think of each other and the best ways to build a culture of ownership over a team’s wins and losses embodies the key learnings central to Elevate’s mission.

The program further emphasized this message with learning sessions on:

  • Giving and accepting clear feedback;
  • Bias busting and empathy training in the workplace;
  • Conflict management and resolution.

With these trainings, paired with others on growth mindsets and decision making, Elevate’s participants were able to start analyzing the effectiveness of different work environments on productivity. Through breakout sessions, they quickly realized that the more secure and supported an employee feels, the more willing they are to go the extra mile for their team. Equipped with this new knowledge base, many participants have already started bringing these key takeaways to their own workplaces in an effort to build more inclusive and productive cultures.

Whether it’s finding a new role or improving your applied skills, we can’t wait to see how Google Developer programs can help members achieve their professional goals.

For similar opportunities, find out how to join a Google Developer Group near you, here. And register for upcoming applied skills trainings on the Elevate website, here.


Solve for the United Nations’ Sustainable Development Goals with Google technologies in this year’s Solution Challenge.

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs


Created by the United Nations in 2015 to be achieved by 2030, the 17 Sustainable Development Goals (SDGs) agreed upon by all 193 United Nations Member States aim to end poverty, ensure prosperity, and protect the planet.

Last year brought many challenges, but it also brought a greater spirit around helping each other and giving back to our communities. With that in mind, we invite students around the world to join the Google Developer Student Clubs 2021 Solution Challenge!

If you’re new to the Solution Challenge, it is an annual competition that invites university students to develop solutions for real world problems using one or more Google products or platforms.

This year, see how you can use Android, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action, by building a solution for one or more of the UN Sustainable Development Goals.

What winners of the Solution Challenge receive

Participants will receive specialized prizes at different stages:

  1. The Top 50 teams will receive mentorship from Google and other experts to further work on their projects.
  2. The Top 10 finalists will receive a 1-year subscription to Pluralsight, swag, additional customized mentoring from Google, and a feature in the Google Developers Blog and Demo Day live on YouTube.
  3. The 3 Grand Prize Winners will receive all the prizes included in the Top 10 category along with a Chromebook and a private team meeting with a Google executive.

How to get started on the Solution Challenge

There are four main steps to joining the Solution Challenge and getting started on your project:

  1. Register at goo.gle/solutionchallenge and join a Google Developer Student Club at your college or university. If there is no club at your university, you can join the closest one through the event platform.
  2. Select one or more of the United Nations’ 17 Sustainable Development Goals to solve for.
  3. Build a solution using Google technology.
  4. Create a demo and submit your project by March 31, 2021.

Resources from Google for Solution Challenge participants

Google will provide Solution Challenge participants with various resources to help students build strong projects for their contest submission.

  • Live online sessions with Q&As
  • Mentorship from Google, Google Developer Experts, and the Developer Student Club community
  • Curated codelabs designed by Google Developers
  • Access to Design Sprint guidelines developed by Google Ventures
  • and more!

When are winners announced?

Once all the projects are submitted after the March 31st deadline, judges will evaluate and score each submission from around the world using the criteria listed on the website. From there, winning solutions will be announced in three rounds.

Round 1 (May): The Top 50 teams will be announced.

Round 2 (July): After the top 50 teams submit their new and improved solutions, 10 finalists will be announced.

Round 3 (August): In the finale, the top 3 grand prize winners will be announced live on YouTube during the 2021 Solution Challenge Demo Day.

With a passion for building a better world, savvy coding skills, and a little help from Google, we can’t wait to see the solutions students create.

Learn more and sign up for the 2021 Solution Challenge, here.


Announcing New Smart Home App Discovery Features

Posted by Toni Klopfenstein, Developer Advocate

When a user connects a smart device to the Google Assistant via the Home app, the user must select the appropriate related Action from the list of all available Actions. The user then clicks through multiple screens to complete their device setup. Today, we’re releasing two new features to improve this device discovery process and drive customer adoption of your Smart Home Action through the Google Home app. App Discovery and Deep Linking are two convenience features that help users find your Google Assistant-compatible smart devices quickly and onboard faster.

App Discovery enables users to quickly find your smart home Action thanks to suggestion chips within the Google Home app. You can implement this new feature through the Actions Console by creating a verified brand link between your Action, your website, and your mobile app. App Discovery doesn’t require any coding work to implement, making this a development-light feature that provides great improvements to the user experience of device linking.

In addition to helping users discover your Action directly through suggestion chips, Deep Linking enables you to guide users to your account linking flow within the Google Home app in one step. These deep links are easily added to your mobile app or web content, guiding users to your smart home integration with a single tap.

Deep Linking and App Discovery can help you create a more streamlined onboarding experience for your users, driving increased engagement and user satisfaction, and can be implemented with minimal engineering work.

To implement App Discovery and Deep Linking for your Smart Home Action, check out the developer documents, or watch the video covering these new features.

You can also check out the smart home codelabs if you are just starting to build out your Action.

We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!


A Google for Startups Accelerator for startups using voice technology to better the world

Posted by Jason Scott, Head of Startup Developer Ecosystem, U.S., Google

At Google, we have long understood that voice user interfaces can help millions of people accomplish their goals more effectively. Our journey in voice began in 2008 with Voice Search — with notable milestones since, such as building our first deep neural network in 2012, our first sequence-to-sequence network in 2015, launching Google Assistant in 2016, and processing speech fully on device in 2019. These building blocks have enabled the unique voice experiences across Google products that our users rely on every day.

Voice AI startups play a key role in helping build and deliver innovative voice-enabled experiences to users. And, Google is committed to helping tech startups deliver high impact solutions in the voice space. This month, we are excited to announce the Google for Startups Accelerator: Voice AI program, which will bring together the best of Google’s programs, products, people and technology with a joint mission to advance and support the most promising voice-enabled AI startups across North America.

As part of this Google for Startups Accelerator, selected startups will be paired with experts to help tackle the top technical challenges facing their startup. With an emphasis on product development and machine learning, founders will connect with voice technology and AI/ML experts from across Google to take their innovative solutions to the next level.

We are proud to launch our first ever Google for Startups Accelerator: Voice AI — building upon Google’s longstanding efforts to advance the future of voice-based computing. The accelerator will kick off in March 2021, bringing together a cohort of 10 to 12 innovative voice technology startups. If this sounds like your startup, we’d love to hear from you. Applications are open until January 28, 2021.


Announcing gRPC Kotlin 1.0 for Android and Cloud

Posted by Louis Wasserman, Software Engineer and James Ward, Developer Advocate

Kotlin is now the fourth “most loved” programming language with millions of developers using it for Android, server-side / cloud backends, and various other target runtimes. At Google, we’ve been building more of our apps and backends with Kotlin to take advantage of its expressiveness, safety, and excellent support for writing asynchronous code with coroutines.

Since everything in Google runs on top of gRPC, we needed an idiomatic way to do gRPC with Kotlin. Back in April 2020 we announced the open sourcing of gRPC Kotlin, something we’d originally built for ourselves. Since then we’ve seen over 30,000 downloads and usage in Android and Cloud. The community and our engineers have been working hard polishing docs, squashing bugs, and making improvements to the project; culminating in the shiny new 1.0 release! Dive right in with the gRPC Kotlin Quickstart!

For those new to gRPC & Kotlin, let’s do a quick runthrough of some of the awesomeness. gRPC builds on Protocol Buffers, aka “protos” (language-agnostic & high-performance data interchange), and adds the network protocol for efficiently communicating with protos. From a proto definition the servers, clients, and data transfer objects can all be generated. Here is a simple gRPC proto:

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

In a Kotlin project you can then define the implementation of the Greeter’s SayHello service with something like:

object : GreeterGrpcKt.GreeterCoroutineImplBase() {
  override suspend fun sayHello(request: HelloRequest) =
    HelloReply
      .newBuilder()
      .setMessage("hello, ${request.name}")
      .build()
}

You’ll notice that the function has `suspend` on it because it uses Kotlin’s coroutines, a built-in way to handle async / reactive IO. Check out the server example project.

With gRPC the client “stubs” are generated, making it easy to connect to gRPC services. For the proto above, the client stub can be used in Kotlin with:

val stub = GreeterCoroutineStub(channel)
val request = HelloRequest.newBuilder().setName("world").build()
val response = stub.sayHello(request)
println("Received: ${response.message}")

In this example the `sayHello` method is also a `suspend` function utilizing Kotlin coroutines to make the reactive IO easier. Check out the client example project.

Kotlin also has an API for doing reactive IO on streams (as opposed to single requests), called Flow. gRPC Kotlin generates client and server stubs using the Flow API for stream inputs and outputs. The proto can define a service with streaming requests, streaming responses, or bidirectional streaming, like:

service Greeter {
  rpc SayHello (stream HelloRequest) returns (stream HelloReply) {}
}

In this example, the server’s `sayHello` can be implemented with Flows:

object : GreeterGrpcKt.GreeterCoroutineImplBase() {
  override fun sayHello(requests: Flow<HelloRequest>): Flow<HelloReply> {
    return requests.map { request ->
      println(request)
      HelloReply.newBuilder().setMessage("hello, ${request.name}").build()
    }
  }
}

This example just transforms each `HelloRequest` item on the flow to an item in the output / `HelloReply` Flow.

The bidirectional-streaming client is similar to the unary client, but instead passes a Flow to the `sayHello` stub method and then operates on the returned Flow:

val stub = GreeterCoroutineStub(channel)
val helloFlow = flow {
  while (true) {
    delay(1000)
    emit(HelloRequest.newBuilder().setName("world").build())
  }
}

stub.sayHello(helloFlow).collect { helloResponse ->
  println(helloResponse.message)
}

In this example the client sends a `HelloRequest` to the server via Flow, once per second. When the client gets items on the output Flow, it just prints them. Check out the bidi-streaming example project.

As you’ve seen, creating data transfer objects and services around them is made elegant and easy with gRPC Kotlin. But there are a few other exciting things we can do with this…

Android Clients

Protobuf compilers can have a “lite” mode which generates smaller, higher performance classes which are more suitable for Android. Since gRPC Kotlin uses gRPC Java it inherits the benefits of gRPC Java’s lite mode. The generated code works great on Android and there is a `grpc-kotlin-stub-lite` artifact which depends on the associated `grpc-protobuf-lite`. Using the generated Kotlin stub client is just like on the JVM. Check out the stub-android example and android example.

GraalVM Native Image Clients

The gRPC lite mode is also a great fit for GraalVM Native Image which turns JVM-based applications into ahead-of-time compiled native images, i.e. they run without a JVM. These applications can be smaller, use less memory, and start much faster so they are a good fit for auto-scaling and Command Line Interface environments. Check out the native-client example project which produces a nice & small 14MB executable client app (no JVM needed) and starts, connects to the server, makes a request, handles the response, and exits in under 1/100th of a second using only 18MB of memory.

Google Cloud Ready

Backend services created with gRPC Kotlin can easily be packaged for deployment in Kubernetes, Cloud Run, or really anywhere you can run docker containers or JVM apps. Cloud Run is a cloud service that runs docker containers and scales automatically based on demand so you only pay when your service is handling requests. If you’d like to give a gRPC Kotlin service a try on Cloud Run:

  1. Deploy the app with a few clicks
  2. In Cloud Shell, run the client to connect to your app on the cloud:
    export PROJECT_ID=PUT_YOUR_PROJECT_ID_HERE
    docker run -it gcr.io/$PROJECT_ID/grpc-hello-world-mvn
    "java -cp target/classes:target/dependency/* io.grpc.examples.helloworld.HelloWorldClientKt YOUR_CLOUD_RUN_DOMAIN_NAME"

Here is a video of what that looks like:

Check out more Cloud Run gRPC Kotlin examples.

Thank You!

We are super excited to have reached 1.0 for gRPC Kotlin and are incredibly grateful to everyone who filed bugs, sent pull requests, and gave the pre-releases a try! There is still more to do, so if you want to help or follow along, check out the project on GitHub.

Also huge shoutouts to Brent Shaffer, Patrice Chalin, David Winer, Ray Tsang, Tyson Henning, and Kevin Bierhoff for all their contributions to this release!


Treble Plus One Equals Four

Posted by Iliyan Malchev (Project Treble Architect), Amith Dsouza (Technical Account Manager) , and Veerendra Bhora (Strategic Partnerships Manager)


Extending Android updates on Qualcomm’s Mobile Platforms

In the past few years, the latest Android OS has been adopted earlier by OEMs and deployed in larger numbers to our users. The growth in adoption has been driven by OEMs delivering faster OS updates, taking advantage of the architecture introduced by Project Treble.

At the time Android 11 launched there were 667M active users on Android 10, 82% of whom got their Android 10 build via an over the air (OTA) update. Despite the events throughout 2020, there is a continued momentum among our partners to either launch their devices on Android 11 or offer Android 11 OTAs on their devices earlier.

Line graph comparing Android Pie, Android 10, and Android 11

Our efforts until now have been focused on making OS updates easier and faster to deploy. The other side of this coin is supporting updates for a longer period of time, and today we’d like to provide an overview of the changes we are making to help our partners achieve this.

Project Treble was an ambitious re-architecture of Android that created a split between the OS framework and device-specific low-level software (called the vendor implementation) through a well-defined, stable vendor interface. As a part of this split, the Android OS framework guarantees backward compatibility with the vendor implementation, which is checked through a standardized compliance test suite – VTS. With each Android release, Project Treble publishes Generic System Images (GSIs) that are built from AOSP sources, and are guaranteed to be backwards-compatible with the previous 3 versions of vendor implementations, in addition of course to the current release—for a total span of four years. Devices launching with the new Android release must have vendor implementations compatible with that GSI. This is the primary vehicle for reducing fragmentation within the OS framework. While we allow and encourage our partners to modify the framework itself, the modifications post-Treble must be done in a way that reduces upgrade costs from one version to the next.

Besides the reuse of a vendor implementation across OS updates, the Treble architecture also facilitates the re-use of the same OS framework code across different vendor implementations.

Chart comparing Original OS framework to Updated OS framework

Another important change introduced by Project Treble is that new vendor-impacting requirements for Android devices are never retroactive. They apply only to devices launching on that Android version and not to devices upgrading from an older version. The term vendor-impacting here refers to requirements for new HALs, or for the shipping of a newer Linux kernel, to the device’s vendor implementation. A good example might be a new revision of the camera HAL to support multiple rear camera sensors. Since the Android framework guarantees compatibility with the older HALs, we enable older vendor implementations to be reused by OEMs for upgrades without the considerable cost of updating them with new requirements.

This principle, combined with the backwards-compatibility guarantee, gives device manufacturers (OEMs) the flexibility to support upgrades both faster (since they have to upgrade just the framework, which would cover all of their devices, including those with older versions of the vendor implementation), as well as at a lower cost (since they do not have to touch the older vendor implementations).

However, seen from a System-on-Chip (SoC) manufacturer’s perspective, this design introduces additional complexity. For each SoC model, the SoC manufacturer now needs to create multiple combinations of vendor implementations to support OEMs who use that chipset to launch new devices and deploy OS upgrades on previously launched devices.

The result is that three years beyond the launch of a chipset, the SoC vendor would have to support up to 6 combinations of OS framework software and vendor implementations. The engineering costs associated with this support limited the duration for which SoC vendors offered Android OS software support on a chipset. For every single chipset, the software support timeline would look like this:

Timeline of OS framework

Considering that SoC providers have dozens of SoC models at any point of time, the full picture looks closer to this:

More accurate support timeline

The crux of the problem was that, while device requirements were never retroactive, the requirements for SoCs were. For example, on Android Pie, SoCs had to support two versions of the Camera HAL API on a chipset if it was used to support new device launches and upgrades.

From this perspective, the solution was simple: we had to extend the no-retroactivity principle to the SoCs as well as to devices. With this change, the SoC provider would be able to support Android with the same vendor implementations on their SoCs for device launches as well as upgrades.

During the past year, we have been working hard to implement this solution. Building on our deep collaboration with our colleagues at Qualcomm, today we’re announcing the results of this work. Going forward, all new Qualcomm mobile platforms that take advantage of the no-retroactivity principle for SoCs will support 4 Android OS versions and 4 years of security updates. All Qualcomm customers will be able to take advantage of this stability to further lower both the costs of upgrades as well as launches and can now support their devices for longer periods of time.

Going one step further, we’re also reusing the same OS framework software across multiple Qualcomm chipsets. This dramatically lowers the number of OS framework and vendor implementation combinations that Qualcomm has to support across their mobile platforms and results in lowered engineering, development, and deployment costs. The diagram below indicates how significant the simplification is. From a software-support perspective, it’s an altogether different situation:

Framework timeline with simplification

This change is taking effect with all SoCs launching with Android 11 and later. By working closely with Qualcomm to offer an extended period of OS and security updates, we are looking forward to delivering the best of Android to our users faster, and with greater security for an extended period of time.


2020 Google Assistant developer Year in Review

Posted by Payam Shodjai, Director, Product Management Google Assistant

With 2020 coming to a close, we wanted to reflect on everything we have launched this year to help you, our developers and partners, create powerful voice experiences with Google Assistant.

Today, many top brands and developers turn to Google Assistant to help users get things done on their phones and on Smart Displays. Over the last year, the number of Actions built by third-party developers has more than doubled. Below is a snapshot of some of our partners who’ve integrated with Google Assistant:

2020 Highlights

Below are a few highlights of what we have launched in 2020:

1. Integrate your Android mobile Apps with Google Assistant

App Actions allow your users to jump right into existing functionality in your Android app with the help of Google Assistant. They make it easier for users to find what they’re looking for in your app in a natural way, using their voice. We take care of all the Natural Language Understanding (NLU) processing, making it possible to build an integration in only a few days. In 2020, we announced that App Actions are now available for all Android developers to voicify their apps and integrate with Google Assistant.

For common tasks such as opening your app, opening specific pages in your app, or searching within it, we introduced Common Intents. For deeper integration, we’ve expanded our vertical-specific built-in intents (BIIs) to cover more than 60 intents across 10 verticals, adding new categories like Social, Games, Travel & Local, Productivity, Shopping, and Communications.

For cases where there isn’t a built-in intent for your app functionality, you can instead create custom intents that are unique to your Android app. Like BIIs, custom intents follow the actions.xml schema and act as connection points between Assistant and your defined fulfillments.
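As a rough sketch of what this schema looks like, the hypothetical actions.xml below binds a custom intent to an in-app deep link. The intent name, query-patterns resource, and URL template are illustrative placeholders, not values from the source:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- actions.xml: hypothetical example; the intent name, queryPatterns
     resource, and urlTemplate below are illustrative placeholders -->
<actions>
  <!-- A custom intent unique to this app, matched against
       query patterns defined as a string-array resource -->
  <action
      intentName="custom.actions.intent.REWARDS_STATUS"
      queryPatterns="@array/RewardsStatusQueries">
    <!-- Fulfillment: Assistant opens this deep link in the app -->
    <fulfillment urlTemplate="myapp://rewards/status" />
  </action>
</actions>
```

In this shape, the custom intent acts as the connection point the paragraph above describes: Assistant matches the user’s phrase against your query patterns, then hands off to the fulfillment you defined.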

Learn more about how to integrate your app with Google Assistant here.

2. Create new experiences for Smart Displays

We also announced new developer tools to help you build high quality, engaging experiences to reach users at home by building for Smart Displays.

Actions Builder is a new web-based IDE that provides a graphical interface to show the entire conversation flow. It allows you to manage Natural Language Understanding (NLU) training data and provides advanced debugging tools. And, it is fully integrated into the Actions Console so you can now build, debug, test, release, and analyze your Actions – all in one place.

The Actions SDK provides a file-based representation of your Action and lets you work in a local IDE. It enables local authoring of NLU and conversation schemas, as well as bulk import and export of training data to improve conversation quality. The Actions SDK is accompanied by a command-line interface, so you can build and manage an Action entirely in code using your favorite source control and continuous integration tools.

Interactive Canvas allows you to add visual, immersive experiences to Conversational Actions. We announced the expansion of Interactive Canvas to support Storytelling and Education verticals earlier this year.

Continuous Match Mode allows the Assistant to respond immediately to a user’s speech for more fluid experiences by recognizing defined words and phrases set by you.

We also created a central hub for you to find resources to build games on Smart Displays. This site is filled with a game design playbook, interviews with game creators, code samples, access to tools, and everything else you need to create awesome games for smart displays.

Actions API provides a new programmatic way to test your critical user journeys more thoroughly and effectively, to help you ensure your Action’s conversations run smoothly.

The Dialogflow migration tool inside the Actions Console automates much of the work to move projects to the new and improved Actions Builder tool.

We also worked with partners such as Voiceflow and Jovo, to launch integrations to support voice application development on the Assistant. This effort is part of our commitment to enable you to leverage your favorite development tools, while building for Google Assistant.

We launched several other new features that help you build high quality experiences for the home, such as the Media APIs, new and improved voices (available in the Actions Console), and the home storage API.

Get started building for Smart Displays here.

3. Discovery features

Once you’ve built high quality Actions, you are ready for your users to discover them. We have designed new touchpoints to help your users easily learn about your Actions.

For example, on Android mobile, we’ll recommend relevant App Actions as suggestions even when the user doesn’t mention the app’s name explicitly, and Google Assistant will proactively suggest apps based on individual usage patterns. Android users will also be able to customize their experience with app shortcuts, setting up quick phrases for the app functions they use most often. By simply saying “Hey Google, shortcuts”, they can set up and explore suggested shortcuts in the settings screen. We’ll also make proactive shortcut suggestions throughout Google Assistant’s mobile experience, tailored to how you use your phone.

Assistant Links are deep links into your conversational Actions, so you can send users from your website, or anywhere else on the web, directly to rich Google Assistant experiences.

We also recently opened two new built-in intents (BIIs) for public registration: Education and Storytelling. Registering your Actions for these intents allows your users to discover them in a simple, natural way through general requests to Google Assistant on Smart Displays. People will then be able to say “Hey Google, teach me something new” and they will be presented with a browsable selection of different education experiences. For stories, users can simply say “Hey Google, tell me a story”.

We know you build personalized, premium experiences for your users and need to make it easy for them to connect their accounts to your Actions. To help streamline this process, we opened two betas for improved account linking flows that allow simple, streamlined authentication via your apps.

  • Link with Google lets users who are already signed in to your Android or iOS app complete the linking flow in just a few clicks, without needing to re-enter their credentials.
  • App Flip helps you build a better mobile account linking experience, so your users can seamlessly link their accounts to Google without having to re-enter their credentials.

What to expect in 2021

Looking ahead, we will double down on enabling you, our developers and partners, to build great experiences for Google Assistant and help you reach your users on the go and at home. You can expect to hear more from us about how we are improving the Google Assistant experience, making it easier for Android developers to integrate their apps with Google Assistant and helping developers succeed through discovery and monetization.

We are excited to see what you will build with these new features and tools. Thank you for being a part of the Google Assistant ecosystem. We can’t wait to launch even more features and tools for Android developers and Smart Display experiences in 2021.

Want to stay in the know with announcements from the Google Assistant team? Sign up for our monthly developer newsletter here.

Read More

Opening the Google Play Store for more car apps

Posted by Eric Bahna, Product Manager

In October, we published the Android for Cars App Library to beta so you could start bringing your navigation, parking, and charging apps to Android Auto. Thanks for sending your feedback through our issue tracker; it tells us where to improve and clarify things. Now we’re ready to take the next step in delivering great in-car experiences.

Today, you can publish your apps to closed testing tracks in the Google Play Store. This is a great way to get feedback on how well your app meets the app quality guidelines, plus get your in-car experience in front of your first Android Auto users.


Three of our early access partners: T map, PlugShare, and 2GIS

We’re preparing the Play Store for open testing tracks soon. You can get your app ready today by publishing to closed testing. We’re eager to see what you’ve built!

Read More