We recently updated Google Tasks to limit the nesting of tasks (also known as “subtasks”) to just one level. While subtasks are still supported, any tasks nested beyond that—like subtasks of subtasks and so on—are no longer available.
Starting August 30, 2019, we will introduce the same structure in the Tasks API. This means that as of this date, tasks nested more than one level deep will no longer be supported and will automatically be converted to subtasks.
Who is impacted? If you’re a developer with an application that supports multiple levels of subtasks—such as an app for project management, note taking or checklists—you may be impacted. Any other application that relies on the Tasks API to manage nested tasks could also be affected.
What should you do? Please check and update your applications accordingly if they support multiple levels of subtasks.
These changes will take effect on August 30, 2019, so you will have close to a year to implement these changes in your application.
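To illustrate one way an application might prepare for this change, here is a minimal sketch in plain Python that flattens tasks nested more than one level deep. It operates on simple dicts with an `id` and optional `parent` field, mirroring the shape of items returned by the API's tasks list; the function itself is a hypothetical helper, not part of the Tasks API.

```python
def flatten_deep_subtasks(tasks):
    """Re-parent any task nested more than one level deep so it
    becomes a direct subtask of its top-level task.

    `tasks` is a list of dicts, each with an 'id' and an optional
    'parent' field referencing another task's id.
    """
    by_id = {t["id"]: t for t in tasks}

    def depth(task):
        # Count ancestors between this task and a top-level task.
        d = 0
        while task.get("parent"):
            task = by_id[task["parent"]]
            d += 1
        return d

    def root_id(task):
        # Walk up to the top-level ancestor (a task with no parent).
        while task.get("parent"):
            task = by_id[task["parent"]]
        return task["id"]

    for t in tasks:
        if depth(t) > 1:
            t["parent"] = root_id(t)
    return tasks
```

After running this, every task is either top level or a direct subtask of a top-level task, matching the one-level structure described above.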
As you go about your busy day, every minute matters. We’re evolving the design of Wear OS by Google to help you get the most out of your time—providing quicker access to your information and notifications, more proactive help from the Google Assistant, and smarter health coaching—all with a swipe of your finger.
Easier access to notifications and more
We’re making it easier to browse, dismiss or take action on your notifications with the new notification stream. Simply swipe up to see all your notifications at once. See an important message? Just tap to select a built-in smart reply without even leaving your stream. Swipe down on your watch to get quicker access to handy features and shortcuts like Google Pay or ‘Find my phone’.
More proactive help from your Google Assistant
With the new design, you can now receive proactive and personalized help from your Google Assistant. Let’s say you’re headed to the airport—swipe right on your watch to see your flight status or hotel reservation. Tap on smart suggestions like the weather at your destination or find a restaurant near your hotel. When you’re getting ready for the day, your Google Assistant will help you stay ahead by reminding you to bring an umbrella, showing you your day’s meetings, or warning you if there is a delay on your commute. The Google Assistant will also suggest features you may not have tried yet and will become more helpful over time as it gets to know you and as we add new features.
Smarter health coaching
Last week, we announced that Google Fit is making it easier to be healthy with two new activity goals: Heart Points and Move Minutes. We worked with the American Heart Association and the World Health Organization to design these goals based on their physical activity recommendations which are shown to have health benefits for your heart and mind. Now, you can simply swipe left to start a workout or see how you are tracking toward your goals.
We’ll begin rolling out these new features over the next month, so look out for updates on your Wear OS by Google smartwatch. Some features may vary by phone OS, watch or country.
There’s a lot of talk out there about how to stay active and healthy: “get your steps in,” “sitting is the new smoking,” “no pain, no gain.” It can feel overwhelming. So we’ve worked with the American Heart Association (AHA) and the World Health Organization (WHO) to understand the science behind physical activity and help you get the amount and intensity needed to improve your health.
Activity goals to improve your health
The new Google Fit is centered around two simple and smart activity goals based on AHA and WHO’s activity recommendations shown to impact health: Move Minutes and Heart Points.
When it comes to your health, it’s important to move more and sit less. Earn Move Minutes for all of your activity and get motivated to make small, healthy changes throughout your day, like taking the stairs instead of the elevator, or catching up with a friend over a walk instead of a coffee.
Activities that get your heart pumping harder result in even greater health benefits. Heart Points give you credit for these activities. You’ll score one point for each minute of moderate activity, like picking up the pace while walking your dog, and double points for more intense activities like running or kickboxing. It takes just 30 minutes of brisk walking 5 days a week to reach the AHA and WHO’s recommended amount of physical activity, which is shown to reduce the risk of heart disease, improve sleep, and increase overall mental well-being.
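As a rough sketch of the scoring described above—one point per minute of moderate activity, double points for intense activity—the arithmetic looks like this (the function is illustrative, not Google Fit's implementation):

```python
def heart_points(moderate_minutes, intense_minutes=0):
    """Heart Points: 1 point per minute of moderate activity
    (e.g. brisk walking), 2 points per minute of intense activity
    (e.g. running or kickboxing)."""
    return moderate_minutes * 1 + intense_minutes * 2

# 30 minutes of brisk walking, 5 days a week, meets the
# AHA/WHO weekly recommendation cited above:
weekly_points = 5 * heart_points(30)
```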
However you move, make it count
When you’re walking, running or biking throughout the day, Google Fit will automatically detect these activities using your phone or watch sensors—like the accelerometer and GPS—to estimate the number of Heart Points you earn. If you’re into a different type of exercise, you can choose other activities like gardening, pilates, rowing or spinning, and Google Fit will calculate the Heart Points and Move Minutes achieved during your workout. Google Fit also integrates with other fitness apps like Strava, Runkeeper, Endomondo and MyFitnessPal, so you get credit for every Move Minute and Heart Point you earn. You’ll get tips and help to adjust your goals over time based on your activity. Your journal will show your activities, achievements and goal progress across all of your apps.
If you already use Google Fit on an Android phone or a Wear OS by Google watch, you’ll see these changes on your phone or smartwatch beginning this week. If you’re new to Google Fit, learn more at google.com/fit and join us on our way to a healthier and more active life.
Many of society’s most pressing problems have grown increasingly complex, so the search for solutions can feel overwhelming. At DeepMind and Google, we believe that if we can use AI as a tool to discover new knowledge, solutions will be easier to reach.
In 2016, we jointly developed an AI-powered recommendation system to improve the energy efficiency of Google’s already highly-optimized data centers. Our thinking was simple: Even minor improvements would provide significant energy savings and reduce CO2 emissions to help combat climate change.
Now we’re taking this system to the next level: instead of human-implemented recommendations, our AI system is directly controlling data center cooling, while remaining under the expert supervision of our data center operators. This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers.
How it works
Every five minutes, our cloud-based AI pulls a snapshot of the data center cooling system from thousands of sensors and feeds it into our deep neural networks, which predict how different combinations of potential actions will affect future energy consumption. The AI system then identifies which actions will minimize the energy consumption while satisfying a robust set of safety constraints. Those actions are sent back to the data center, where the actions are verified by the local control system and then implemented.
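The loop described above—snapshot, prediction, safety filtering, selection—can be sketched as follows. The function arguments (`sensors`, `predict_energy`, `is_safe`) are placeholders for systems the post does not detail; this is an illustration of the control pattern, not Google's implementation.

```python
def control_step(sensors, candidate_actions, predict_energy, is_safe):
    """One iteration of the cooling control loop: score every
    candidate action with the learned model, discard anything that
    violates a safety constraint, and pick the action predicted to
    minimize energy consumption."""
    snapshot = sensors()  # readings from thousands of sensors
    safe_actions = [a for a in candidate_actions if is_safe(a, snapshot)]
    return min(safe_actions, key=lambda a: predict_energy(snapshot, a))
```

In the real system this step runs every five minutes, and the chosen action is then verified again by the local control system before being implemented.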
The idea evolved out of feedback from our data center operators who had been using our AI recommendation system. They told us that although the system had taught them some new best practices—such as spreading the cooling load across more, rather than fewer, pieces of equipment—implementing the recommendations required too much operator effort and supervision. Naturally, they wanted to know whether we could achieve similar energy savings without manual implementation.
We’re pleased to say the answer was yes!
We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes.
Data Center Operator, Google
Designed for safety and reliability
Google’s data centers contain thousands of servers that power popular services including Google Search, Gmail and YouTube. Ensuring that they run reliably and efficiently is mission-critical. We’ve designed our AI agents and the underlying control infrastructure from the ground up with safety and reliability in mind, and use eight different mechanisms to ensure the system will behave as intended at all times.
One simple method we’ve implemented is to estimate uncertainty. For every potential action—and there are billions—our AI agent calculates its confidence that this is a good action. Actions with low confidence are eliminated from consideration.
Another method is two-layer verification. Optimal actions computed by the AI are vetted against an internal list of safety constraints defined by our data center operators. Once the instructions are sent from the cloud to the physical data center, the local control system verifies the instructions against its own set of constraints. This redundant check ensures that the system remains within local constraints and operators retain full control of the operating boundaries.
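A minimal sketch of that two-layer check: an action is only implemented if it passes both the cloud-side constraints defined by the operators and the local control system's own, independent constraints. The constraint representation here (plain predicate functions) is hypothetical.

```python
def two_layer_verify(action, cloud_constraints, local_constraints):
    """Redundant verification: the cloud vets the action against the
    operators' safety constraints, then the local control system
    re-checks it against its own set before implementation."""
    passes_cloud = all(check(action) for check in cloud_constraints)
    passes_local = all(check(action) for check in local_constraints)
    return passes_cloud and passes_local
```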
Most importantly, our data center operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer seamlessly from AI control to the on-site rules and heuristics that define the automation industry today.
Increasing energy savings over time
Whereas our original recommendation system had operators vetting and implementing actions, our new AI control system directly implements the actions. We’ve purposefully constrained the system’s optimization boundaries to a narrower operating regime to prioritize safety and reliability, meaning there is a risk/reward trade off in terms of energy reductions.
Despite being in place for only a matter of months, the system is already delivering consistent energy savings of around 30 percent on average, with further expected improvements. That’s because these systems get better over time with more data, as the graph below demonstrates. Our optimization boundaries will also be expanded as the technology matures, for even greater reductions.
Our direct AI control system is finding yet more novel ways to manage cooling that have surprised even the data center operators. Dan Fuenffinger, one of Google’s data center operators who has worked extensively alongside the system, remarked: “It was amazing to see the AI learn to take advantage of winter conditions and produce colder than normal water, which reduces the energy required for cooling within the data center. Rules don’t get better over time, but AI does.”
We’re excited that our direct AI control system is operating safely and dependably, while consistently delivering energy savings. However, data centers are just the beginning. In the long term, we think there’s potential to apply this technology in other industrial settings, and help tackle climate change on an even grander scale.
Since introducing Google Photos, we’ve aspired to be the home for all of your photos, helping you bring together a lifetime of memories in one place.
We offer two options to back up your photos and videos: original quality and high quality. Starting today in the U.S., those of you who back up in original quality and pay for expanded storage will be upgraded to Google One. For the same price or in some cases less, you’ll now have extra space and additional benefits—like 24/7 support from Google experts—to help you get more out of Google. There are no changes to our high quality backup option.
Recently, we introduced Google One, a plan that gives you expanded storage and helps you get more out of Google. Over the past few months, people with Google Drive paid plans have been upgraded to Google One. And starting today, people in the U.S. can choose to upgrade to Google One.
More storage for what matters
Google One gives you more storage across Drive, Gmail, and Photos. With plenty of space, your most important memories and files are stored safely in the cloud and available on all your devices. We’ve improved the price of some of our plans and added new plan options, so you can find one that works for you.
More help when you need it
As a Google One member, you can easily get in touch with a team of Google experts 24/7 to answer your questions—whether you need help recovering a file you accidentally deleted or want to learn how to use Gmail when you’re offline.
More benefits for members
With Google One, you’ll also get extra benefits across Google. We’ve started with credits on Google Play and deals on hotels found in Google Search. In the coming months, keep an eye out for Google Store and Google Express benefits and more.
More for your family
You can also share your plan with up to five additional family members. That means simplified storage under one bill, and access to the benefits of Google One.
We’re just getting started and will be rolling out to more countries over the next few weeks. If you’re in the U.S. and would like to upgrade, visit the Google One website.
We are delighted to announce the results of the first phase of our joint research partnership with Moorfields Eye Hospital, which could potentially transform the management of sight-threatening eye disease.
The results, published online in Nature Medicine (open access full text, see end of blog), show that our AI system can quickly interpret eye scans from routine clinical practice with unprecedented accuracy. It can correctly recommend how patients should be referred for treatment for over 50 sight-threatening eye diseases as accurately as world-leading expert doctors.
These are early results, but they show that our system could handle the wide variety of patients found in routine clinical practice. In the long term, we hope this will help doctors quickly prioritise patients who need urgent treatment, which could ultimately save sight.
A more streamlined process
Currently, eyecare professionals use optical coherence tomography (OCT) scans to help diagnose eye conditions. These 3D images provide a detailed map of the back of the eye, but they are hard to read and need expert analysis to interpret.
The time it takes to analyse these scans, combined with the sheer number of scans that healthcare professionals have to go through (over 1,000 a day at Moorfields alone), can lead to lengthy delays between scan and treatment, even when someone needs urgent care.
Visual and audio events tend to occur together: a musician plucking guitar strings and the resulting melody; a wine glass shattering and the accompanying crash; the roar of a motorcycle as it accelerates. These visual and audio stimuli are concurrent because they share a common cause. Understanding the relationship between visual events and their associated sounds is a fundamental way that we make sense of the world around us.
In Look, Listen, and Learn and Objects that Sound (to appear at ECCV 2018), we explore this observation by asking: what can be learnt by looking at and listening to a large number of unlabelled videos? By constructing an audio-visual correspondence learning task that enables visual and audio networks to be jointly trained from scratch, we demonstrate that: the networks are able to learn useful semantic concepts; the two modalities can be used to search one another (e.g. to answer the question, “Which sound fits well with this image?”); and the object making the sound can be localised.
Limitations of previous cross-modal learning approaches
Learning from multiple modalities is not new; historically, researchers have largely focused on image-text or audio-vision pairings.
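As a toy sketch of the cross-modal retrieval idea—searching one modality with an embedding from the other—consider the following. The embeddings here are stand-in vectors; in the papers they come from jointly trained visual and audio networks.

```python
import numpy as np

def best_match(query_embedding, candidate_embeddings):
    """Cross-modal retrieval: given an embedding from one modality
    (e.g. an image), return the index of the closest embedding from
    the other modality (e.g. a set of sounds) by cosine similarity."""
    q = query_embedding / np.linalg.norm(query_embedding)
    c = candidate_embeddings / np.linalg.norm(
        candidate_embeddings, axis=1, keepdims=True
    )
    return int(np.argmax(c @ q))
```

Because the two networks are trained so that corresponding image and audio clips land near each other in a shared space, nearest-neighbour search like this answers “Which sound fits well with this image?”.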