Song stuck in your head? Just hum to search

Do you know that song that goes, “da daaaa da da daaaa na naa naa ooohh yeah”? Or the one that starts with the guitar chords going, “da na na naa”? We all know how frustrating it is when you can’t remember the name of a song or any of the words but the tune is stuck in your head. Today at Search On, we announced that Google can now help you figure it out—no lyrics, artist name or perfect pitch required. 

Hum to search for your earworm

Starting today, you can hum, whistle or sing a melody to Google to solve your earworm. On your mobile device, open the latest version of the Google app or find your Google Search widget, tap the mic icon and say “what’s this song?” or tap the “Search a song” button. Then start humming for 10-15 seconds. On Google Assistant, it’s just as simple. Say “Hey Google, what’s this song?” and then hum the tune. This feature is currently available in English on iOS and in more than 20 languages on Android, and we hope to expand it to more languages in the future.

After you’re finished humming, our machine learning algorithm helps identify potential song matches. And don’t worry, you don’t need perfect pitch to use this feature. We’ll show you the most likely options based on the tune. Then you can select the best match and explore information on the song and artist, view any accompanying music videos or listen to the song on your favorite music app, find the lyrics, read analysis and even check out other recordings of the song when available. 


How machines learn melodies 

So how does it work? An easy way to explain it is that a song’s melody is like its fingerprint: They each have their own unique identity. We’ve built machine learning models that can match your hum, whistle or singing to the right “fingerprint.”

When you hum a melody into Search, our machine learning models transform the audio into a number-based sequence representing the song’s melody. Our models are trained to identify songs based on a variety of sources, including humans singing, whistling or humming, as well as studio recordings. The algorithms also take away all the other details, like accompanying instruments and the voice’s timbre and tone. What we’re left with is the song’s number-based sequence, or the fingerprint.
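To make the “number-based sequence” idea concrete, here’s a minimal sketch in Python, assuming hypothetical pitch-detector output (MIDI note numbers): representing the melody as successive intervals discards the key you hummed in, along with timbre and accompaniment. Google’s actual models and representation are not public.

```python
# A toy melody "fingerprint": successive pitch intervals. The pitch values
# are hypothetical detector output, not anything Google has published.
def melody_fingerprint(pitches):
    """Reduce a pitch sequence to intervals, discarding absolute key."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

# The same tune hummed in two different keys:
in_c = [60, 60, 62, 60, 65, 64]
in_g = [67, 67, 69, 67, 72, 71]

assert melody_fingerprint(in_c) == melody_fingerprint(in_g)
print(melody_fingerprint(in_c))  # [0, 2, -2, 5, -1]
```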

We compare these sequences to thousands of songs from around the world and identify potential matches in real time. For example, if you listen to Tones and I’s “Dance Monkey,” you’ll recognize the song whether it was sung, whistled, or hummed. Similarly, our machine learning models recognize the melody of the studio-recorded version of the song, which we can use to match it with a person’s hummed audio. 
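The comparison step can then be pictured as a nearest-neighbor search over those fingerprints. This sketch uses dynamic time warping, which tolerates the tempo drift and small pitch errors of casual humming; the catalog entries are made up, and the real system matches against fingerprints derived from full recordings.

```python
# Match a hummed fingerprint against a tiny catalog with dynamic time warping.
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[len(a)][len(b)]

catalog = {  # hypothetical fingerprints, not real song data
    "Dance Monkey": [2, 0, -2, -1, 3],
    "Happy Birthday": [0, 2, -2, 5, -1],
}
hummed = [0, 2, -2, 5, -2]  # slightly off-pitch humming
print(min(catalog, key=lambda song: dtw_distance(hummed, catalog[song])))
# Happy Birthday
```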

This builds on the work of our AI Research team’s music recognition technology. We launched Now Playing on the Pixel 2 in 2017, using deep neural networks to bring low-power recognition of music to mobile devices. In 2018, we brought the same technology to the SoundSearch feature in the Google app and expanded the reach to a catalog of millions of songs. This new experience takes it a step further, because now we can recognize songs without the lyrics or original song. All we need is a hum.


So next time you can’t remember the name of some catchy song you heard on the radio or that classic jam your parents love, just start humming. You’ll have your answer in record time. 



Visual ways to search and understand our world

Whether you’re a student learning about photosynthesis or a parent researching the best cars for your growing family, people turn to Google with all sorts of curiosities. And we can help you understand in different ways—through text, your voice or even your phone’s camera. Today, as part of the Search On event, we’re announcing new ways you can use Google Lens and augmented reality (AR) while learning and shopping.

Visual tools to help you learn 

For many families, adjusting to remote learning hasn’t been easy, but tools like Google Lens can help lighten the load. With Lens, you can search what you see using your camera. Lens can now recognize 15 billion things—up from 1 billion just two years ago—to help you identify plants, animals, landmarks and more. If you’re learning a new language, Lens can also translate more than 100 languages, such as Spanish and Arabic, and you can tap to hear words and sentences pronounced out loud.

If you’re a parent, your kids may ask you questions about things you never thought you’d need to remember, like quadratic equations. From the search bar in the Google app on Android and iOS, you can use Lens to get help on a homework problem. With step-by-step guides and videos, you can learn and understand the foundational concepts to solve math, chemistry, biology and physics problems.


Sometimes, seeing is understanding. For instance, visualizing the inner workings of a plant cell or the elements in the periodic table in 3D is more helpful than reading about them in a textbook. AR brings hands-on learning home, letting you explore concepts up close in your space. Here’s how Melissa Brophy-Plasencio, an educator from Texas, is incorporating AR into her lesson plans.

Melissa Brophy-Plasencio, an educator from Texas, shares how she’s incorporating AR into her science lessons.

Shop what you see with Google Lens 

Another area where the camera can be helpful is shopping—especially when what you’re looking for is hard to describe in words. With Lens, you can already search for a product by taking a photo or screenshot. Now, we’re making it even easier to discover new products as you browse online on your phone. When you tap and hold an image on the Google app or Chrome on Android, Lens will find the exact item or similar ones, and suggest ways to style it. This feature is coming soon to the Google app on iOS.


Lens uses our Style Engine technology, which combines the world’s largest database of products with millions of style images. It then pattern-matches to understand concepts like “ruffle sleeves” or “vintage denim” and how they pair with different apparel. 
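One way to picture that pattern matching, as a sketch only: embed product and style images as vectors and rank catalog items by cosine similarity to the image you long-pressed. The vectors and labels below are invented; Style Engine’s actual features and training data are not public.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings an image model might produce for catalog items.
catalog = {
    "ruffle-sleeve blouse": np.array([0.9, 0.1, 0.0]),
    "vintage denim jacket": np.array([0.1, 0.95, 0.2]),
    "plain white tee":      np.array([0.2, 0.1, 0.9]),
}
query = np.array([0.85, 0.2, 0.1])  # embedding of the long-pressed image

ranked = sorted(catalog, key=lambda item: cosine(query, catalog[item]), reverse=True)
print(ranked[0])  # ruffle-sleeve blouse
```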

Bring the showroom to you with AR

When you can’t go into stores to check out a product up close, AR can bring the showroom to you. If you’re in the market for a new car, for example, you’ll soon be able to search for it on Google and see an AR model right in front of you. You can easily check out what the car looks like in different colors, zoom in to see intricate details like buttons on the dashboard, view it against beautiful backdrops and even see it in your driveway. We’re experimenting with this feature in the U.S. and working with top auto brands, such as Volvo and Porsche, to bring these experiences to you soon.


AR experience of the 2020 Volvo XC40 Recharge

Everyone’s journey to understand is different. Whether you snap a photo with Lens or immerse yourself in AR, we hope you find what you’re looking for…and even have some fun along the way.


How AI is powering a more helpful Google

When I first came across the web as a computer scientist in the mid-90s, I was struck by the sheer volume of information online, in contrast with how hard it was to find what you were looking for. It was then that I first started thinking about search, and I’ve been fascinated by the problem ever since. 

We’ve made tremendous progress over the past 22 years, making Google Search work better for you every day. With recent advancements in AI, we’re making bigger leaps forward in improvements to Google than we’ve seen over the last decade, so it’s even easier for you to find just what you’re looking for. Today during our Search On livestream, we shared how we’re bringing the most advanced AI into our products to further our mission to organize the world’s information and make it universally accessible and useful.

Helping you find exactly what you’re looking for

At the heart of Google Search is our ability to understand your query and rank relevant results for that query. We’ve invested deeply in language understanding research, and last year we introduced how BERT language understanding systems are helping to deliver more relevant results in Google Search. Today we’re excited to share that BERT is now used in almost every query in English, helping you get higher quality results for your questions. We’re also sharing several new advancements to search ranking, made possible through our latest research in AI: 

Spelling
We’ve continued to improve our ability to understand misspelled words, and for good reason—one in 10 queries every day is misspelled. Today, we’re introducing a new spelling algorithm that uses a deep neural net to significantly improve our ability to decipher misspellings. In fact, this single change makes a greater improvement to spelling than all of our improvements over the last five years.


A new spelling algorithm helps us understand the context of misspelled words, so we can help you find the right results, all in under 3 milliseconds.
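For a sense of the problem the neural net solves, here is the classic dictionary-based baseline it improves on, in the spirit of Peter Norvig’s well-known corrector: generate every string within one edit of the typo and keep the most frequent real word. The word counts are invented, and unlike Google’s model, this baseline ignores the context of the surrounding query.

```python
# Noisy-channel spelling baseline: candidates within one edit, ranked by
# corpus frequency. The tiny word-count table is illustrative only.
WORD_COUNTS = {"dinner": 900, "diner": 200, "winner": 500, "inner": 100}
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one delete, transpose, replace or insert away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    candidates = {w for w in edits1(word) if w in WORD_COUNTS} or {word}
    return max(candidates, key=lambda w: WORD_COUNTS.get(w, 0))

print(correct("dinnner"))  # dinner
```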

Passages
Very specific searches can be the hardest to get right, since sometimes the single sentence that answers your question might be buried deep in a web page. We’ve recently made a breakthrough in ranking and are now able to index not just whole web pages, but individual passages from those pages. By better understanding the relevancy of specific passages, not just the overall page, we can find that needle-in-a-haystack information you’re looking for. This technology will improve 7 percent of search queries across all languages as we roll it out globally.

With new passage understanding capabilities, Google can understand that a specific passage can be far more relevant to a query than a broader page on that topic.
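As an illustration of the idea, here is a toy version of passage-level scoring: split a page into passages, score each one against the query, and let the best passage speak for the page. The overlap score and example text are stand-ins; production ranking uses learned neural relevance models.

```python
def overlap_score(query, passage):
    """Fraction of query words that appear in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q)

page = [  # a page whose best answer is buried in one passage
    "Our sunroom was a weekend project using mostly salvaged materials.",
    "You can check if your house windows use UV glass with a match flame:",
    "hold the flame near the pane and look at the color of the reflections.",
]
query = "how to tell if windows are uv glass"
print(max(page, key=lambda passage: overlap_score(query, passage)))
```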

Subtopics
We’ve applied neural nets to understand subtopics around an interest, which helps deliver a greater diversity of content when you search for something broad. As an example, if you search for “home exercise equipment,” we can now understand relevant subtopics, such as budget equipment, premium picks, or small space ideas, and show a wider range of content for you on the search results page. We’ll start rolling this out by the end of this year.
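To make the diversity idea concrete, here’s a small sketch that assumes a model has already labeled candidate results with subtopics: interleaving across subtopics keeps any one cluster from filling the whole page. The labels and results are invented.

```python
from itertools import zip_longest

results_by_subtopic = {  # hypothetical model output for "home exercise equipment"
    "budget equipment": ["10 cheap home gym picks", "Best dumbbells under $50"],
    "premium picks": ["High-end smart bikes, reviewed"],
    "small space ideas": ["Fold-away equipment for apartments"],
}

def diversify(groups):
    """Round-robin across subtopics so page one covers the query's breadth."""
    tiers = zip_longest(*groups.values())
    return [result for tier in tiers for result in tier if result is not None]

print(diversify(results_by_subtopic))
```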


Access to high-quality information during COVID-19

We’re making several new improvements to help you navigate your world and get things done more safely and efficiently. Live busyness updates show you how busy a place is right now so you can more easily social distance, and we’ve added a new feature to Live View to help you get essential information about a business before you even step inside. We’re also adding COVID-19 safety information front and center on Business Profiles across Google Search and Maps. This will help you know if a business requires you to wear a mask, if you need to make an advance reservation, or if the staff is taking extra safety precautions, like temperature checks. And we’ve used our Duplex conversational technology to help local businesses keep their information up-to-date online, such as opening hours and store inventory.

Understanding key moments in videos

Using a new AI-driven approach, we’re now able to understand the deep semantics of a video and automatically identify key moments. This lets us tag those moments in the video, so you can navigate them like chapters in a book. Whether you’re looking for that one step in a recipe tutorial, or the game-winning home run in a highlights reel, you can easily find those moments. We’ve started testing this technology this year, and by the end of 2020 we expect that 10 percent of searches on Google will use this new technology.
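As a sketch of the structure this produces, suppose a model has already labeled each ten-second segment of a video; merging consecutive labels yields the chapter-like key moments a player can jump to. The labels are hypothetical.

```python
def to_key_moments(labels, seconds_per_segment=10):
    """Merge consecutive identical labels into (start_seconds, label) chapters."""
    moments, prev = [], None
    for i, label in enumerate(labels):
        if label != prev:
            moments.append((i * seconds_per_segment, label))
            prev = label
    return moments

labels = ["warmup", "warmup", "at bat", "at bat", "home run", "celebration"]
print(to_key_moments(labels))
# [(0, 'warmup'), (20, 'at bat'), (40, 'home run'), (50, 'celebration')]
```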


Deepening understanding through data

Sometimes the best search result is a statistic. But often stats are buried in large datasets and not easily comprehensible or accessible online. Since 2018, we’ve been working on the Data Commons Project, an open knowledge database of statistical data started in collaboration with the U.S. Census, Bureau of Labor Statistics, World Bank and many others. Bringing these datasets together was a first step, and now we’re making this information more accessible and useful through Google Search.

Now when you ask a question like “how many people work in Chicago,” we use natural language processing to map your search to one specific set of the billions of data points in Data Commons to provide the right stat in a visual, easy-to-understand format. You’ll also find other relevant data points and context—like stats for other cities—to help you easily explore the topic in more depth.
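Here’s a toy sketch of that mapping step: pull a statistical variable and a place out of the question, then look the pair up. The identifiers echo Data Commons conventions, but the parsing, the table and the value are all invented for illustration.

```python
STAT_VARS = {"work": "Count_Worker", "population": "Count_Person"}  # hypothetical mapping
PLACES = {"chicago": "geoId/1714000", "boston": "geoId/2507000"}
OBSERVATIONS = {("Count_Worker", "geoId/1714000"): 1_400_000}  # mock data slice

def answer(query):
    words = query.lower().split()
    stat_var = next(STAT_VARS[w] for w in words if w in STAT_VARS)
    place = next(PLACES[w] for w in words if w in PLACES)
    return OBSERVATIONS.get((stat_var, place))

print(answer("how many people work in Chicago"))  # 1400000
```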


Helping quality journalism through advanced search

Quality journalism often comes from long-term investigative projects, requiring time-consuming work sifting through giant collections of documents, images and audio recordings. As part of Journalist Studio, our new suite of tools to help reporters do their work more efficiently, securely, and creatively through technology, we’re launching Pinpoint, a new tool that brings the power of Google Search to journalists. Pinpoint helps reporters quickly sift through hundreds of thousands of documents by automatically identifying and organizing the most frequently mentioned people, organizations and locations. Reporters can sign up to request access to Pinpoint starting this week.
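Pinpoint’s own pipeline isn’t public, but the core idea, tallying named entities across a document set, can be sketched with the open-source spaCy library as a stand-in:

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # small English pipeline
WANTED = {"PERSON", "ORG", "GPE"}    # people, organizations, places

def entity_tallies(documents):
    """Count how often each person, organization and location is mentioned."""
    tallies = Counter()
    for doc in nlp.pipe(documents):
        tallies.update(ent.text for ent in doc.ents if ent.label_ in WANTED)
    return tallies

docs = ["The mayor met Jane Doe of Acme Corp in Springfield.",
        "Jane Doe declined to comment on the Acme Corp filings."]
print(entity_tallies(docs).most_common(3))
```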

Search what you see, and explore information in 3D

For many topics, seeing is key to understanding. Several new features in Lens and AR in Google Search help you learn, shop, and discover the world in new ways. Many of us are dealing with the challenges of learning from home, and with Lens, you can now get step-by-step homework help on math, chemistry, biology and physics problems. Social distancing has also dramatically changed how we shop, so we’re making it easier to shop visually online, whether you’re looking for a sweater or want a closer look at a new car but can’t visit a showroom.
 

If you don’t know how to search it, sing it

We’ve all had the experience of a tune stuck in our head whose lyrics we can’t quite remember. Now, when those moments arise, you just have to hum to search, and our AI models can match the melody to the right song.
 

What sets Google Search apart

There has never been more choice in the ways people access information, and we need to constantly develop cutting-edge technology to ensure that Google remains the most useful and most trusted way to search. Four key elements form the foundation for all our work to improve Search and answer trillions of queries every year. These elements are what make Google helpful and reliable for the people who come to us each day to find information.

Understanding all the world’s information
We’re focused on deeply understanding all the world’s information, whether that information is contained in words on web pages, in images or videos, or even in the places and objects around us. With investments in AI, we’re able to analyze and understand all types of information in the world, just as we did by indexing web pages 22 years ago. We’re pushing the boundaries of what it means to understand the world, so before you even type in a query, we’re ready to help you explore new forms of information and insights never before available. 

The highest quality information 
People rely on Search for the highest quality information available, and our commitment to quality is what has set Google apart from day one. Every year we launch thousands of improvements to make Search better, and we rigorously test each of these changes to ensure people find them helpful. Our ranking factors and policies are applied fairly to all websites, and this has led to widespread access to a diversity of information, ideas and viewpoints.

World-class privacy and security
To keep people and their data safe, we invest in world-class privacy and security. We’ve led the industry in keeping you safe while searching with Safe Browsing and spam protection. We believe that privacy is a universal right and are committed to giving every user the tools they need to be in control.

Open access for everyone
Last—but certainly not least—we are committed to open access for everyone. We aim to help the open web thrive, sending more traffic to the open web every year since Google was created. Google is free for everyone, accessible on any device, in more than 150 languages around the world, and we continue to expand our ability to serve people everywhere.

So wherever you are, whatever you’re looking for, however you’re able to sing, spell, say, or visualize it, you can search on with Google.


How Google autocomplete predictions are generated

You come to Google with an idea of what you’d like to search for. As soon as you start typing, predictions appear in the search box to help you finish what you’re typing. These time-saving predictions are from a feature called Autocomplete, which we covered previously in this How Search Works series.

In this post, we’ll explore how Autocomplete’s predictions are automatically generated based on real searches and how this feature helps you finish typing the query you already had in mind. We’ll also look at why not all predictions are helpful, and what we do in those cases.

Where predictions come from

Autocomplete predictions reflect searches that have been done on Google. To determine what predictions to show, our systems begin by looking at common and trending queries that match what someone starts to enter into the search box. For instance, if you were to type in “best star trek…”, we’d look for the common completions that would follow, such as “best star trek series” or “best star trek episodes.”


That’s how predictions work at the most basic level. However, there’s much more involved. We don’t just show the most common predictions overall. We also consider things like the language of the searcher or where they are searching from, because these make predictions far more relevant. 
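A minimal sketch of that basic level, assuming a log of query counts keyed by locale (real systems work from vastly larger logs and use prefix data structures such as tries):

```python
QUERY_LOG = {  # invented (query, locale) -> count data
    ("best star trek series", "en-US"): 9_000,
    ("best star trek episodes", "en-US"): 7_500,
    ("best star trek movie", "en-US"): 4_000,
    ("driving test centre near me", "en-CA"): 3_000,
}

def predictions(prefix, locale, k=3):
    """Most common queries in this locale that extend the typed prefix."""
    matches = [(q, n) for (q, loc), n in QUERY_LOG.items()
               if loc == locale and q.startswith(prefix)]
    return [q for q, n in sorted(matches, key=lambda qn: -qn[1])[:k]]

print(predictions("best star trek", "en-US"))
```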

Below, you can see predictions for those searching for “driving test” in the U.S. state of California versus the Canadian province of Ontario. Predictions differ in naming relevant locations or even spelling “centre” correctly for Canadians rather than using the American spelling of “center.”


To provide better predictions for long queries, our systems may automatically shift from predicting an entire search to portions of a search. For example, we might not see a lot of queries for “the name of the thing at the front” of some particular object. But we do see a lot of queries for “the front of a ship” or “the front of a boat” or “the front of a car.” That’s why we’re able to offer these predictions toward the end of what someone is typing.


We also take freshness into account when displaying predictions. If our automated systems detect there’s rising interest in a topic, they might show a trending prediction even if it isn’t typically the most common of all related predictions that we know about. For example, searches for a basketball team are probably more common than individual games. However, if that team just won a big face-off against a rival, timely game-related predictions may be more useful for those seeking information that’s relevant in that moment.
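One way to picture that freshness adjustment is as a score blending a long-term count with recent velocity; the weighting below is an invented illustration, not Google’s formula.

```python
def score(long_term_count, last_day_count, freshness_weight=50):
    """Boost predictions whose recent traffic is spiking."""
    return long_term_count + freshness_weight * last_day_count

candidates = {  # invented counts: (all-time, last 24 hours)
    "lakers roster": (100_000, 300),
    "lakers game last night": (8_000, 4_000),
}
print(max(candidates, key=lambda q: score(*candidates[q])))
# lakers game last night
```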

Predictions will also vary, of course, depending on the specific topic that someone is searching for. People, places and things all have different attributes that people are interested in. For example, someone searching for “trip to New York” might see a prediction of “trip to New York for Christmas,” as that’s a popular time to visit that city. In contrast, “trip to San Francisco” may show a prediction of “trip to San Francisco and Yosemite.” Even if two topics seem to be similar or fall into similar categories, you won’t always see the same predictions if you try to compare them. Predictions will reflect the queries that are unique and relevant to a particular topic.

Overall, Autocomplete is a complex time-saving feature that’s not simply displaying the most common queries on a given topic. That’s also why it differs from and shouldn’t be compared against Google Trends, a tool for journalists and anyone else interested in researching the popularity of searches and search topics over time.

Predictions you likely won’t see

Predictions, as explained, are meant to be helpful ways for you to more quickly complete what you were about to type. But like anything, predictions aren’t perfect. There’s the potential to show unexpected or shocking predictions. It’s also possible that people might take predictions as assertions of fact or opinion. We also recognize that some queries are less likely to lead to reliable content.

We deal with these potential issues in two ways. First and foremost, we have systems designed to prevent potentially unhelpful and policy-violating predictions from appearing. Second, if our automated systems don’t catch predictions that violate our policies, we have enforcement teams that remove predictions in accordance with those policies.

Our systems are designed to recognize terms and phrases that might be violent, sexually explicit, hateful, disparaging or dangerous. When we recognize that such content might surface in a particular prediction, our systems prevent it from displaying. 

People can still search for such topics using those words, of course. Nothing prevents that. We simply don’t want to unintentionally shock or surprise people with predictions they might not have expected.

Using our automated systems, we can also recognize if a prediction is unlikely to return much reliable content. For example, after a major news event, there can be any number of unconfirmed rumors or unverified claims spreading, which we would not want people to think Autocomplete is somehow confirming. In these cases, our systems identify if there’s likely to be reliable content on a particular topic for a particular search. If that likelihood is low, the systems might automatically prevent a prediction from appearing. But again, this doesn’t stop anyone from completing a search on their own, if they wish.
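Schematically, the two safeguards amount to gates a candidate prediction must pass before display. Both classifiers in this sketch are crude stubs standing in for trained models.

```python
def violates_policy(prediction):
    """Stub for the policy models described above."""
    blocked_terms = {"slur", "gore"}  # placeholder list
    return any(term in prediction for term in blocked_terms)

def reliable_content_likelihood(prediction):
    """Stub for a learned estimate of reliable-result likelihood."""
    return 0.1 if "unconfirmed rumor" in prediction else 0.9

def display_predictions(candidates, threshold=0.5):
    return [p for p in candidates
            if not violates_policy(p)
            and reliable_content_likelihood(p) >= threshold]

print(display_predictions(["weather today", "unconfirmed rumor about the crash"]))
# ['weather today']
```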

While our automated systems typically work very well, they don’t catch everything. This is why we have policies for Autocomplete, which we publish for anyone to read. Our systems aim to prevent policy-violating predictions from appearing. But if any such predictions do get past our systems, and we’re made aware (such as through public reporting options), our enforcement teams work to review and remove them, as appropriate. In these cases, we remove the specific prediction in question, and often use pattern-matching and other methods to catch closely related variations.

As an example of all this in action, consider our policy about names in Autocomplete, which began in 2016. It’s designed to prevent showing offensive, hurtful or inappropriate queries in relation to named individuals, so that people aren’t potentially forming an impression about others based solely on predictions. We have systems that aim to prevent these types of predictions from showing for name queries. But if violations do get through, we remove them in line with our policies. 

You can always search for what you want

Having discussed why some predictions might not appear, it’s also helpful to remember that predictions are not search results. Occasionally, people concerned about predictions for a particular query might suggest that we’re preventing actual search results from appearing. This is not the case. Autocomplete policies only apply to predictions. They do not apply to search results. 

We understand that our protective systems may prevent some useful predictions from showing. In fact, our systems take a particularly cautious approach when it comes to names and might prevent some non-policy-violating predictions from appearing. However, we feel that taking this cautious approach is best. That’s especially because even if a prediction doesn’t appear, this does not impact someone’s ability to finish typing a query on their own and find search results. 

We hope this has helped you understand more about how we generate predictions that allow you to more quickly complete the query you started, whether that’s while typing on your laptop or swiping the on-screen keyboard on your phone.


If you’ve got it, haunt it: Halloween 2020 costume trends

This year’s Frightgeist is a real treat. From cats in taco costumes and baby sharks to Supergirl and fun flamingos, we just couldn’t wait until Oct. 31 to start the festivities.

No matter how you’re celebrating this year, Halloween is the perfect reason to ditch the quarantine couture and get dressed up in some spook-tacular attire. To give you some ideas as holiday prep begins, we took a look at what costumes were trending last month in the United States.

Your favorite frightful fashions

This September’s trending Halloween costumes are action-packed: Martial arts gurus and dungeon masters take the lead, followed by Jedis and cowgirls that are out of this world—literally. 

  1. Cobra Kai

  2. Dungeon Master 

  3. The Mandalorian 

  4. Space Cowgirl 

  5. Trolls 

  6. Belle 

  7. Marshmello 

  8. Inflatable shark 

  9. Firefighter

  10. Sanderson sisters 

Cool costumes for kids

Which witch will be the go-to Halloween costume for kids this year? Three of the top 10 trending costumes for kids last month are famous witches. But if that’s not your thing, bats and werewolves are two fright-astic options. 

  1. Supergirl 

  2. Flamingo 

  3. Hocus Pocus 

  4. Witch 

  5. Glinda 

  6. Robot 

  7. Maui

  8. Bat 

  9. Sally 

  10. Werewolf 

We also took a look at the most-searched costumes across the U.S. in September, including costumes for couples, babies and pets. 

Thinking of coupling up?

Several classic couples kept their spots in 2020: Bonnie and Clyde, Lilo and Stitch, as well as The Fairly OddParents’ Cosmo and Wanda, are the top three most-searched couples costumes for the second year in a row. If you want to change it up, options like “Lydia and Beetlejuice” or “Coraline and Wybie” are new to the list.

  1. Bonnie and Clyde

  2. Lilo and Stitch

  3. Cosmo and Wanda

  4. Coraline and Wybie

  5. Lydia and Beetlejuice

  6. Mario and Luigi

  7. Woody and Jessie

  8. Angel and Devil

  9. Phineas and Ferb

  10. Sharkboy and Lavagirl

Put your pets on parade 

What’s cuter than a cat wrapped in a taco or a Corgi dressed as a dinosaur? Last month’s most-searched pet costumes will definitely have our pets earning some treats this Halloween—no tricks required.

  1. Cat taco 

  2. Corgi stegosaurus 

  3. Twinkie 

  4. Beetlejuice 

  5. Fish 

  6. Woody dog 

  7. Chucky

  8. Frog 

  9. Pumpkin 

  10. Raccoon 

Babies can say “boo!” too

Lions, tigers, and baby sharks—oh my! Baby animals could make the perfect costume for little ones this Halloween. 

  1. Baby shark 

  2. Baby Yoda

  3. Baby pumpkin 

  4. Boss baby 

  5. Baby dinosaur 

  6. Baby Olaf 

  7. Baby chicken 

  8. Baby tiger 

  9. Baby bat 

  10. Baby lion 

For more of what people are searching in your city and around the country, check out our interactive Frightgeist map. Witch-ing you a Happy Halloween!


The Search Console Training lives on

In November 2019 we announced the Search Console Training YouTube series and started publishing videos regularly. The goal of the series was to create updated video content to be used alongside Search documentation, for example in the Help Center and on the Developers site.

The wonderful Google Developer Studio team (the engine behind those videos!) put together this fun blooper reel for the first wave of videos that we recorded in the Google London studio.

So far we’ve published twelve episodes in the series, each focusing on a different part of the tool. We’ve seen it’s helping lots of people learn how to use Search Console – so we decided to continue recording videos… at home! Please bear with the trucks, ambulances, neighbors, passing clouds, and of course the doorbell. ¯\_(ツ)_/¯

In addition to the location change, we’re also changing the scope of the new videos. Instead of focusing on one report at a time, we’ll discuss how Search Console can help YOUR business. In each episode we’ll focus on types of websites, like ecommerce, and job roles, like developers.

To hear about new videos as soon as they’re published, subscribe to our YouTube channel, and feel free to leave feedback on Twitter.

Stay tuned!

Daniel Waisberg, Search Advocate


Why is the sky orange? How Google gave people the right info

On the morning of September 10, millions of people in Northern California woke up to an orange sky after wildfire smoke spread like a thick layer across the West Coast. It persisted for days, and it was the first time lots of people had ever seen something like this. 

To understand what was happening, many people turned to Search. According to Google Trends, searches for “why is the sky orange” hit an all-time high this month in the United States. As you can see in the graph below, this wasn’t a totally new query. There are many pages on the web with general scientific explanations of what can cause the sky to turn orange. But people wanted to know why, in that moment, where they were, the sky was tangerine-tinted.


Search interest for “why is the sky orange” since 2004, US (Google Trends)

So how does Google respond to a query spike like this? Well, language understanding is at the core of Search, but it’s not just about the words. Critical context, like time and place, also helps us understand what you’re really looking for. This is particularly true for featured snippets, a feature in Search that highlights pages that our systems determine are likely a great match for your search. We’ve made improvements to better understand when fresh or local information — or both — is key to delivering relevant results to your search. 

In the case of the orange sky phenomenon, for people in Northern California, time and location were really important to understanding what these searches were looking for. Our freshness indicators identified that a rush of new content was being produced on this topic that was both locally relevant and different from the more evergreen content that existed. This signaled to our systems to ignore most of the specifics that they previously understood about the topic of “orange sky,” like the relation to a sunset, but to retain broad associations like “air” and “ocean” that were still relevant. In a matter of minutes, our systems learned this new pattern and provided fresh featured snippet results for people looking for this locally relevant information in the Bay Area.
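A freshness indicator of the simplest possible kind can be sketched as spike detection against a trailing average. The numbers are invented, and production signals are far richer, but the shape of the decision is similar.

```python
def is_spiking(daily_counts, factor=5, window=7):
    """True if today's count dwarfs the trailing-week average."""
    history, today = daily_counts[-window - 1:-1], daily_counts[-1]
    baseline = sum(history) / len(history)
    return today > factor * max(baseline, 1)

# Hypothetical count of new documents per day about an "orange sky"
daily_docs = [4, 3, 5, 4, 6, 3, 4, 900]
print(is_spiking(daily_docs))  # True
```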


Put simply, instead of surfacing general information on what causes a sunset, when people searched for “why is the sky orange” during this time period, our systems automatically pulled in current, location-based information to help people find the timely results they were searching for. 

Over the course of the week, we saw even more examples of these systems at work. As a residual effect of the wildfires, New York City and Massachusetts started experiencing a hazy sky. But that wasn’t the case in all states. So for a query like “why is it hazy?” local context was similarly important for providing a relevant result.


For this query, people in New York found an explanation of how the wildfire smoke was caught in a jet stream, which caused the haze to move east. People in Boston would have found a similar featured snippet, but specific to the conditions in that city. And those in Alaska, who were not impacted, would not see these same results. 

These are just two of billions of queries we get each day, and as new searches arise and information in the world changes, we’ll continue to provide fresh, relevant results in these moments.

Read More

The rise and fall and rise again of “now more than ever”

One of my favorite Google tools is the Google Books Ngram Viewer, or “Ngrams.” Originally created in 2009 by members of the Google Books team, Ngrams shows how books and other pieces of literature have used certain words or phrases over time. You can chart the rise (and fall) of colloquialisms like “sockdollager” or “take the egg”—or even “that slaps.” 

“Ngrams simply aggregates the use of words or phrases across the entire Google Books dataset,” says Michael Ballbach, a software engineer who works on Google Books. “It then allows users to graph the usage of those words or phrases through time.” Each word being searched is a “gram” that the tool searches across its database. 
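The aggregation Michael describes can be sketched in a few lines: count occurrences of a phrase per year and normalize by the total number of same-length grams that year. The two-entry “corpus” is invented.

```python
def ngrams(words, n):
    """All n-word windows of a token list."""
    return list(zip(*(words[i:] for i in range(n))))

def yearly_frequency(corpus_by_year, phrase):
    target = tuple(phrase.lower().split())
    freq = {}
    for year, text in corpus_by_year.items():
        grams = ngrams(text.lower().split(), len(target))
        freq[year] = grams.count(target) / len(grams)
    return freq

corpus = {1805: "now more than ever the union must hold now more than ever",
          1955: "the quiet years went by with little to say"}
print(yearly_frequency(corpus, "now more than ever"))
```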

Ngrams’s capabilities have grown recently, thanks to an update in 2019 that added approximately 19 million more books to its dataset. “For the English language corpus, that adds trillions of words,” Michael says. For context, that’s roughly the equivalent of three million copies of “War and Peace”!

But there’s one phrase—er, four grams—that I’m particularly interested in, one that’s been surfacing more and more during these…challenging, unprecedented, uncertain, unusual times: “Now more than ever.” 

Perhaps you’ve even noticed it? “Now more than ever” has invaded our vernacular; in fact, I’m sure you’ve read it (or a similar phrase) in a Keyword post or two. So I decided to dive into Ngrams to see if “now more than ever” is showing up…now more than ever. While we’re currently experiencing a spike, there have been others: in the early 1940s, around 1915-1920 and in 1866. Between 1805 and 1809 it was particularly high—nearly as high as it is today.

And then of course there was the banner year of 1752, when things peaked for “now more than ever.” 

Ngrams usage of “now more than ever” over time

Today, as we’re living through a pandemic, wildfires, racial injustice and so, so much more, it feels obvious why we’re increasingly saying and hearing “now more than ever,” but what about back then? What made people feel like everything had a certain crucialness? 

While the Ngrams team doesn’t investigate the causes of the booms and busts of words and phrases, for this particular exercise, I thought a little about what could have possibly been happening during these periods of “now more than ever.” I can imagine how, in the 1940s, World War II changed the lives of people everywhere. 1915-1920 was marked by World War I—and of course, the influenza pandemic of 1918. In 1866, the United States was emerging from civil war. 1805 to 1809 was a heady time for the young U.S. government.

“If you have the time or inclination, you can use Books Search to try and get some insights,” Michael explains. So I plugged in “now more than ever,” searched under Books, and toggled the time settings for 1751 to 1753 to try and see if I could glean anything about the peak year of 1752. And while I can’t say I know what about that time really pushed the “now more than everness,” a handful of British literary journals were definitely using the phrase. 

But things don’t stay at a “now more than ever” pitch. From 1955 to 1996, “now more than ever” was relatively uncommon, before climbing steeply through the late ’90s and early aughts to today. 

Maybe you, like me, will find some comfort in knowing that this moment in time—as unprecedented, challenging and uncertain as it may be—is not the only one in which everything is “now more than ever.” Maybe you, too, can appreciate the light Ngrams sheds on the lives of the words we choose. 

“I think that language is evolving just like society is evolving. That is, language is a reflection of the society that used it, and vice versa,” Michael says. “How the use of language changes over time reflects at least some of the changes taking place in the wider world. Having better tools to look at one can hopefully lead to insights in the other.” 

And if you’re feeling very “now more than ever,” just remember: This too shall pass.


Travel digitally with Google on World Tourism Day

September 27 is World Tourism Day – a time to celebrate tourism’s ability to promote meaningful exchanges between people around the world, to recall how travel helps us all have fun and recharge, and to make a real difference by supporting livelihoods and protecting our heritage. 

This year may have changed our ability to travel across the globe, but our desire to experience new cultures, see far-off places or discover hidden gems in our own backyard has not diminished. 

Today, Google Arts & Culture has brought together a new collection to help anyone choose their perfect virtual travel, with thousands of museums and cultural destinations to explore. And with the help of our partner CyArk, we’ve brought 37 cultural heritage sites from across the world to Google Search in augmented reality (AR).

Straight from your couch, search on your mobile phone to bring the Moai statues of Ahu Ature Huki on Rapa Nui (Easter Island), the Brandenburg Gate in Germany, or the Maya pyramid of Chichén Itzá in Mexico right into your living room.

  • Thomas Jefferson Memorial and El Castillo, Chichén Itzá, projected in AR

  • Brandenburg Gate, Germany

  • Gateway of India, Mumbai

You can read more about what it takes for CyArk to capture just one site in “Documenting the Thomas Jefferson Memorial” and discover how this work helps global conservation efforts communicate the impacts of climate change to iconic places like Rapa Nui.

Continue your journey on Google Arts & Culture

There are plenty more sites to visit virtually: let Google Arts & Culture be your guide to discover some of the world’s most amazing destinations, from the Wonders of Mexico, the USA, France and many more to some amazing city breaks, action-packed adventures and paradise escapes.

Let your favorite creator take you on a tour

Finally, travel like a local, and explore Andalucia with YouTube creator Kikillo, join a virtual walk around Milan with Instagram creator Federica di Nardo, or listen to the sounds of Florence with The Whispering Traveller.

All this, and more than 10,000 destinations and 2,000 collections are ready to be explored on Google Arts & Culture at g.co/culturaltravel. And if Augmented Reality really has you hooked, make sure to check out a few other cool things including Dinosaurs, the Skeletal System and Apollo 11 by looking them up in Google Search.
