A research team at Carnegie Mellon University (CMU) has been working to make epidemiological forecasting as universal as weather forecasting. When COVID hit, they launched COVIDcast to develop data monitoring and forecasting resources that can help public health officials, researchers, and the public make informed decisions.
Last month, CMU received $1 million from Google.org, along with a team of thirteen Google.org Fellows who will work pro bono for six months to help continue building out COVIDcast. This was part of Google.org’s $100 million commitment to COVID-19 relief.
We caught up with Ryan Tibshirani, a research lead at CMU, to learn more about the project and what the Google.org Fellows will work on.
Tell us a little bit about yourself.
I’m a faculty member at CMU, jointly appointed in Statistics and Machine Learning, and I’m very interested in epidemiological forecasting and tracking. In 2012, I co-founded Delphi, a research group centered on this topic, with Roni Rosenfeld, Professor and Head of Machine Learning at CMU.
What do you focus on most these days?
Since the pandemic began I’ve spent all of my time on COVID-19 research. Delphi has quadrupled its number of researchers in just eight months and we’re laser-focused on COVID. Leading Delphi’s pandemic response effort has been both a challenge—I’ve never done anything like this before—and a joy—the group is full of amazing people.
How did you come up with the idea for COVIDcast?
To back up just a bit: Roni and I formed Delphi in 2012 with the goal to develop the theory and practice of epidemiological forecasting, primarily for seasonal influenza in the U.S. We want this technology to become as universally accepted and useful as today’s weather forecasting.
Our forecasting system has been a top performer at the Centers for Disease Control and Prevention’s (CDC) annual forecasting challenges, and last year the Delphi Group was named one of the two Centers of Excellence for Influenza Forecasting. I like to think of COVIDcast as a remake of what we’ve done for the flu, only better and faster.
Break it down for us, what is COVIDcast?
The COVIDcast project is about building and providing an ecosystem for COVID-19 tracking and forecasting. Our aim is to support informed decision-making at federal, state, and local levels of government, in the healthcare sector, and beyond.
The project has many parts:
Unique relationships with tech and healthcare partners that give us access to data with different views of pandemic activity in the U.S.;
Code and infrastructure to build new, geographically-detailed, continuously-updated COVID-19 indicators;
A historical database of all indicators, including revision tracking;
A public API that serves new indicators daily, along with interactive maps and graphics to display them;
And lastly, modeling work that builds on the indicators to improve nowcasting and forecasting the spread of COVID-19.
A key element of COVIDcast is that we make all of our work as open and accessible as possible to other researchers and the public to help amplify its impact. We share both our data and a range of software tools—from data processing and visualization to sophisticated statistical tools.
How will the Google.org funding and fellowship help?
This support will help Delphi expand our efforts to provide a geographically-detailed view of various aspects of the pandemic and to develop an early warning system for health officials, for example, when the number of cases in a locale is expected to rise. There will be more pandemics and epidemics after COVID-19. We want to be prepared, and we believe Delphi’s work can help us do that.
The Google.org Fellowship just kicked off. What are you most excited about?
Everything! We’re excited to embed all the Google.org Fellows—engineers, user experience designers and researchers, program and product managers—into our workstreams. We hope they can help accelerate our progress and introduce us to leading industry product and software development techniques. Each and every one of the fellows has special skills that will be put to good use. We can’t wait to see what we can achieve, together.
More broadly, what role does the tech sector play in COVID-19 response efforts?
An enormous role. The tech sector is uniquely positioned to provide data and platforms that even governments can’t provide. It also has the skills and experience to quickly assemble large-scale systems, in real time. Google has been extraordinarily helpful to us on all of these fronts.
Researchers around the world have used modeling techniques to find patterns in data and map the spread of COVID-19, in order to combat the disease. Modeling a complex global event is challenging, particularly when there are many variables—human behavior, evolving science and policy, and socio-economic issues—as well as unknowns about the virus itself. Teams across Google are contributing tools and resources to the broader scientific community of epidemiologists, analysts and researchers who are working with policymakers and public health officials to address the public health and economic crisis.
Organizing the world’s data for epidemiological researchers
Lack of access to useful high-quality data has posed a significant challenge, and much of the publicly available data is scattered, incomplete, or compiled in many different formats. To help researchers spend more of their time understanding the disease instead of wrangling data, we’ve developed a set of tools and processes to make it simpler for researchers to discover and work with normalized high-quality public datasets.
With the help of Google Cloud, we developed a COVID-19 Open Data repository—a comprehensive, open-source resource of COVID-19 epidemiological data and related variables like economic indicators and population statistics from over 50 countries. Each data source contains information on its origin and how it’s processed, so that researchers can confirm its validity and reliability. It can also be used with Data Commons, BigQuery datasets, as well as other initiatives that aggregate regional datasets.
This repository also includes two Google datasets developed to help researchers study the impact of the disease in a privacy-preserving manner. In April, we began publishing the COVID-19 Community Mobility Reports, which provide anonymized insights into movement trends to understand the impact of policies like shelter in place. These reports have been downloaded over 16 million times and are now updated three times a week in 64 languages, with localized insights covering 12,000 regions, cities and counties across 135 countries. Groups including the OECD, World Bank and Bruegel have used these reports in their research, and the insights inform strategies like how public health officials could safely unwind social distancing policies.
The latest addition to the repository is the Search Trends symptoms dataset, which aggregates anonymized search trends for over 400 symptoms. This will help researchers better understand the spread of COVID-19 and its potential secondary health impacts.
Tools for managing complex prediction modeling
The data that models rely upon may be imperfect due to a range of factors, including a lack of widespread testing or inconsistent reporting. That’s why COVID-19 models need to account for uncertainty in order for their predictions to be reliable and useful. To help address this challenge, we’re providing researchers with examples of how to implement bespoke epidemiological models using TensorFlow Probability (TFP), a library for building probabilistic models that can measure confidence in their own predictions. With TFP, researchers can use a range of data sources with different granularities, properties, or confidence levels, and factor that uncertainty into the overall prediction models. This could be particularly useful in fine-tuning the increasingly complex models that epidemiologists are using to understand the spread of COVID-19, particularly in gaining city or county-level insights when only state or national-level datasets exist.
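The core idea behind probabilistic modeling here can be sketched without TFP itself: draw model parameters from plausible ranges, run a simple compartmental model for each draw, and report an interval of outcomes rather than a single point estimate. The toy below is a minimal plain-Python illustration of that workflow; the SIR model, parameter ranges, and horizon are illustrative assumptions, not the actual models referenced above.

```python
import random

def sir_step(s, i, r, beta, gamma):
    """One day of a discrete SIR model: beta is the infection rate,
    gamma the recovery rate; s, i, r are population fractions."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate(beta, gamma, days=60, i0=0.01):
    """Run the model and return the peak infected fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        peak = max(peak, i)
    return peak

# Instead of one point estimate, sample uncertain parameters from assumed
# plausible ranges and look at the spread of outcomes.
random.seed(0)
peaks = sorted(simulate(random.uniform(0.2, 0.4), random.uniform(0.05, 0.15))
               for _ in range(1000))
low, high = peaks[25], peaks[974]  # a rough 95% interval on the epidemic peak
```

Reporting `low` and `high` rather than a single peak value is the essential difference between a deterministic forecast and one that quantifies its own uncertainty.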
While models can help predict what happens next, researchers and policymakers are also turning to simulations to better understand the potential impact of their interventions. Simulating these “what if” scenarios involves calculating highly variable social interactions at a massive scale. Simulators can help trial different social distancing techniques and gauge how changes to the movement of people may impact the spread of disease.
Google researchers have developed an open-source agent-based simulator that utilizes real-world data to simulate populations to help public health authorities fine-tune their exposure notification parameters. For example, the simulator can consider different disease and transmission characteristics, the number of places people visit, as well as the time spent in those locations. We also contributed to Oxford’s agent-based simulator by factoring in real-world mobility and representative models of interactions within different workplace sectors to understand the effect of an exposure notification app on the COVID-19 pandemic.
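A drastically simplified agent-based model conveys the mechanics such simulators build on: individual agents visit shared places, infection can pass between co-located agents, and changing behavior (here, the number of places visited per day, a crude stand-in for a distancing policy) changes the outbreak. Every parameter below is invented for illustration and has no connection to the simulators named above.

```python
import random

def run_epidemic(n_agents=300, n_places=30, visits_per_day=3, days=30,
                 p_transmit=0.02, p_recover=0.1, seed=1):
    """Toy agent-based model: each day every agent visits a few shared
    places, and infection can pass within each place. Returns the number
    of agents ever infected."""
    rng = random.Random(seed)
    state = ["S"] * n_agents  # S(usceptible), I(nfected), R(ecovered)
    state[0] = "I"            # one index case
    for _ in range(days):
        # Assign today's visits
        occupants = {p: [] for p in range(n_places)}
        for a in range(n_agents):
            for p in rng.sample(range(n_places), visits_per_day):
                occupants[p].append(a)
        # Transmission within each place that has an infected visitor
        newly_infected = set()
        for agents in occupants.values():
            if any(state[a] == "I" for a in agents):
                for a in agents:
                    if state[a] == "S" and rng.random() < p_transmit:
                        newly_infected.add(a)
        for a in newly_infected:
            state[a] = "I"
        # Recovery (agents infected today stay infectious at least one day)
        for a in range(n_agents):
            if state[a] == "I" and a not in newly_infected and rng.random() < p_recover:
                state[a] = "R"
    return sum(s != "S" for s in state)

# Fewer daily visits should generally shrink the outbreak.
baseline = run_epidemic(visits_per_day=3)
distanced = run_epidemic(visits_per_day=1)
```

Real simulators layer realistic mobility data, contact durations, and disease dynamics on top of this same visit-and-transmit loop.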
The scientific and developer communities are doing important work to understand and manage the pandemic. Whether it’s by contributing to open-source initiatives or funding data science projects and providing Google.org Fellows, we’re committed to collaborating with researchers on efforts to build a more equitable and resilient future.
I never considered myself an addict until the day I found myself huddled under my covers at four in the afternoon, hungover and wishing my surroundings would disappear. This wasn’t the first time that had happened—in fact, it had become a weekly occurrence—but as I curled up into a ball, feeling pathetic and utterly alone, I realized I had no other options. I grabbed my phone from my nightstand and searched “rehab centers near me.”
I’d been dealing with major depression for years, and up until that moment I thought I had tried everything to find a cure. Special diets, an alphabet soup of antidepressant regimens, group therapy, solo therapy, transcranial magnetic stimulation, ketamine infusions. The only thing I hadn’t tried was sobriety. Drugs and alcohol were my only escape. I couldn’t fathom giving up the one thing that freed me from the darkest grips of my own mind.
My Google search surfaced a number of local treatment centers, and after making some calls, I found one with a program that could help me. That was more than two years ago. Since then, thanks to hard work that continues today, I’ve remained sober and depression-free.
Most people in recovery would agree: you can’t do it alone. It’s a reciprocal relationship—my recovery community helps to keep me sober, and my sobriety allows me to play an active role in that community. Twelve-step programs, new habits and the support of others with similar experiences provide a foundation, and then I can build a life I never thought was possible to live when depression controlled my every moment.
That foundation has carried me through COVID-19. Staying sober during a global pandemic is a bit of a paradox. During a time when people are more isolated than ever before, turning to substances to self-soothe seems like a natural response. And the data backs that up: Google searches for “how to get clean” reached an all-time high in June, and “how to get sober” surged in June and then again in August. In the past 30 days, searches for “rehab near me” hit their second-highest peak in recorded history.
And yet sobriety—in an era where it’s harder than ever to stay sober—is precisely what’s gotten me through this time. Staying sober has let me be present with my emotions, to face my anxieties and difficulties head-on. While I can’t numb my feelings, I can protect my mental health. My recovery practice has allowed me to do just that: Daily gratitude lists remind me how fortunate I still am, my sponsor regularly offers wisdom and advice, my peers hold space for my challenges and I do the same for them.
In the throes of my own crisis, the first place I turned to for help was Google. I ended up at a rehab center that profoundly transformed the way I move through the world. Last September, as part of National Recovery Month, Google made these resources even easier to find with its Recover Together site. This year, Google is adding even more features, including a mapping tool that allows you to search for local support groups by simply typing in your zip code. Of course, the search results also include virtual meetings, now that many programs have moved online.
I’m proud to work for a company that prioritizes an issue that affects an estimated one in eight American adults and their loved ones. I’m proud to work for a company where I can take time from my day to attend 12-step meetings, no questions asked, and where I can bring my whole self to work and speak freely about my struggles. And I’m proud to work for a company that celebrates my experience as one of triumph rather than shame, and that’s committed to reducing the stigma around addiction by providing resources for people like me.
Recovery doesn’t happen in a vacuum. I can’t do it all by myself, which is why I’m sharing my story today. I hope that even one person who has fought similar battles will read what I have to say and realize that they, too, aren’t in this alone.
Nonprofits, universities and other academic institutions around the world are turning to artificial intelligence (AI) and data analytics to help us better understand COVID-19 and its impact on communities—especially vulnerable populations and healthcare workers. To support this work, Google.org is giving more than $8.5 million to 31 organizations around the world to aid in COVID-19 response. Three of these organizations will also receive the pro-bono support of Google.org Fellowship teams.
This funding is part of Google.org’s $100 million commitment to COVID-19 relief and focuses on four key areas where new information and action is needed to help mitigate the effects of the pandemic.
Monitoring and forecasting disease spread
Understanding the spread of COVID-19 is critical to informing public health decisions and lessening its impact on communities. We’re supporting the development of data platforms to help model disease and projects that explore the use of diverse public datasets to more accurately predict the spread of the virus.
Improving health equity and minimizing secondary effects of the pandemic
COVID-19 has had a disproportionate effect on vulnerable populations. To address health disparities and drive equitable outcomes, we’re supporting efforts to map the social and environmental drivers of COVID-19 impact, such as race, ethnicity, gender and socioeconomic status. In addition to learning more about the immediate health effects of COVID-19, we’re also supporting work that seeks to better understand and reduce the long-term, indirect effects of the virus—ranging from challenges with mental health to delays in preventive care.
Slowing transmission by advancing the science of contact tracing and environmental sensing
Contact tracing is a valuable tool to slow the spread of disease. Public health officials around the world are using digital tools to help with contact tracing. Google.org is supporting projects that advance science in this important area, including research investigating how to improve exposure risk assessments while preserving privacy and security. We’re also supporting related research to understand how COVID-19 might spread in public spaces, like transit systems.
Supporting healthcare workers
Whether it’s working to meet the increased demand for acute patient care, adapting to rapidly changing protocols or navigating personal mental and physical wellbeing, healthcare workers face complex challenges on the frontlines. We’re supporting organizations that are focused on helping healthcare workers quickly adopt new protocols, deliver more efficient care, and better serve vulnerable populations.
Together, these organizations are helping make the community’s response to the pandemic more advanced and inclusive, and we’re proud to support these efforts. You can find information about the organizations Google.org is supporting below.
Monitoring and forecasting disease spread
Carnegie Mellon University*: informing public health officials with interactive maps that display real-time COVID-19 data from sources such as web surveys and other publicly-available data.
Keio University: investigating the reliability of large-scale surveys in helping model the spread of COVID-19.
University College London: modeling the prevalence of COVID-19 and understanding its impact using publicly-available aggregated, anonymized search trends data.
Boston Children’s Hospital, Oxford University, Northeastern University*: building a platform to support accurate and trusted public health data for researchers, public health officials and citizens.
Tel Aviv University: developing simulation models using synthetic data to investigate the spread of COVID-19 in Israel.
Kampala International University, Stanford University, Leiden University, GO FAIR: implementing data sharing standards and platforms for disease modeling for institutions across Uganda, Ethiopia, Nigeria, Kenya, Tunisia and Zimbabwe.
Improving health equity and minimizing secondary effects of the pandemic
Morehouse School of Medicine’s Satcher Health Leadership Institute*: developing an interactive, public-facing COVID-19 Health Equity Map of the United States.
Florida A&M University, Shaw University: examining structural social determinants of health and the disproportionate impact of COVID-19 in communities of color in Florida and North Carolina.
Boston University School of Public Health: investigating the drivers of racial, ethnic and socioeconomic disparities in the causes and consequences of COVID-19, with a focus on Massachusetts.
University of North Carolina, Vanderbilt University: investigating molecular mechanisms underlying susceptibility to SARS-CoV-2 and variability in COVID-19 outcomes in Hispanic/Latinx populations.
Beth Israel Deaconess Medical Center: quantifying the impact of COVID-19 on healthcare not directly associated with the virus, such as delayed routine or preventative care.
Georgia Institute of Technology: investigating opportunities for vulnerable populations to find information related to COVID-19.
Cornell Tech: developing digital tools and resources for advocates and survivors of intimate partner violence during COVID-19.
University of Michigan School of Information: evaluating health equity impacts of the rapid virtualization of primary healthcare.
Indian Institute of Technology Gandhinagar: modeling the impact of air pollution on COVID-related secondary health exacerbations.
Cornell University, EURECOM: developing scalable and explainable methods for verifying claims and identifying misinformation about COVID-19.
Slowing transmission by advancing the science of contact tracing and environmental sensing
Arizona State University: applying federated analytics (a state-of-the-art, privacy-preserving analytic technique) to contact tracing, including an on-campus pilot.
Stanford University: applying sparse secure aggregation to detect emerging hotspots.
University of Virginia, Princeton University, University of Maryland: designing and analyzing effective digital contact tracing methods.
University of Washington: investigating environmental SARS-CoV-2 detection and filtration methods in bus lines and other public spaces.
Indian Institute of Science, Bengaluru: mitigating the spread of COVID-19 in India’s transit systems with rapid testing and modified commuter patterns.
TU Berlin, University of Luxembourg: using quantum mechanics and machine learning to understand the binding of SARS-CoV-2 spike protein to human cells—a key process in COVID-19 infection.
Supporting healthcare workers
Medic Mobile, Dimagi: developing data analytics tools to support frontline health workers in countries such as India and Kenya.
Global Strategies: developing software to support healthcare workers adopting COVID-19 protocols in underserved, rural populations in the U.S., including Native American reservations.
C Minds: creating an open-source, AI-based support system for clinical trials related to COVID-19.
Hospital Israelita Albert Einstein: supporting and integrating community health workers and volunteers to help deliver mental health services and monitor outcomes in one of Brazil’s most vulnerable communities.
Fiocruz Bahia, Federal University of Bahia: establishing an AI platform for research and information-sharing related to COVID-19 in Brazil.
RAD-AID: creating and managing a data lake for institutions in low- and middle-income countries to pool anonymized data and access AI tools.
Yonsei University College of Medicine: scaling and distributing decision support systems for patients and doctors to better predict hospitalization and intensive care needs due to COVID-19.
University of California Berkeley and Gladstone Institutes: developing rapid at-home CRISPR-based COVID-19 diagnostic tests using cell phone technology.
Fondazione Istituto Italiano di Tecnologia: enabling open-source access to anonymized COVID-19 chest X-ray and clinical data, and researching image analysis for early diagnosis and prognosis.
*Recipient of a Google.org Fellowship
Search is often where people come to get answers on health and wellbeing, whether it’s to find a doctor or treatment center, or understand a symptom better just before a doctor’s visit. In the past, researchers have used Google Search data to gauge the health impact of heatwaves, improve prediction models for influenza-like illnesses, and monitor Lyme disease incidence. Today we’re making available a dataset of search trends for researchers to study the link between symptom-related searches and the spread of COVID-19. We hope this data could lead to a better understanding of the pandemic’s impact.
How search trends can support COVID-19 research
The COVID-19 Search Trends symptoms dataset includes aggregated, anonymized search trends for more than 400 symptoms, signs and health conditions, such as cough, fever and difficulty breathing. The dataset includes trends at the U.S. county-level from the past three years in order to make the insights more helpful to public health, and so researchers can account for changes in searches due to seasonality.
Public health officials currently use a range of datasets to track and forecast the spread of COVID-19. Researchers could use this dataset to study if search trends can provide an earlier and more accurate indication of the reemergence of the virus in different parts of the country. And since measures such as shelter-in-place have reduced the accessibility of care and affected people’s wellbeing more generally, this dataset—which covers a broad range of symptoms and conditions, from diabetes to stress—could also be useful in studying the secondary health effects of the pandemic.
Advancing health research with privacy protections
The COVID-19 Search Trends symptoms dataset is powered by the same anonymization technology that we use in the Community Mobility Reports and other Google products every day. No personal information or individual search queries are included. The dataset was produced using differential privacy, a state-of-the-art technique that adds random noise to the data to provide privacy guarantees while preserving the overall quality of the data.
Similar to Google Trends, the data is normalized based on a symptom’s relative popularity, allowing researchers to study spikes in search interest over different time periods, without exposing any individual query or even the number of queries in any given area.
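The two protections described above, calibrated noise and relative normalization, can be sketched in a few lines. This is an illustrative toy, not the actual pipeline: the per-user sensitivity of 1, the epsilon value, and the hypothetical counts are all assumptions made for the example.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_relative_trends(counts, epsilon, rng):
    """Add Laplace noise calibrated to an assumed sensitivity of one query
    per user per day, then release only values normalized to the noisy
    peak -- so raw query counts are never exposed."""
    sensitivity = 1.0
    noisy = [max(0.0, c + laplace_noise(sensitivity / epsilon, rng)) for c in counts]
    peak = max(noisy) or 1.0
    return [round(100 * n / peak, 1) for n in noisy]

rng = random.Random(42)
daily_counts = [120, 95, 140, 300, 280, 260, 180]  # hypothetical symptom queries
trend = private_relative_trends(daily_counts, epsilon=1.0, rng=rng)
```

The released `trend` values are scaled so the busiest day is 100, which preserves the shape of the time series while the Laplace noise provides the formal differential privacy guarantee on the underlying counts.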
More information about the privacy methods used to generate the dataset can be found in this report.
This early release is limited to the United States and covers searches made in English and Spanish. It covers all states, as well as the many counties where the available data meets quality and privacy thresholds. It was developed specifically to aid research on COVID-19, so we intend to make the dataset available for the duration of the pandemic.
As we receive feedback from public health researchers, civil society groups and the community at large, we’ll evaluate and expand this dataset by including additional countries and regions.
Researchers and public health experts are doing incredible work to respond to the pandemic. We hope this dataset will be useful in their work towards stopping the spread of COVID-19.
Prostate cancer diagnoses are common, with 1 in 9 men developing prostate cancer in their lifetime. A cancer diagnosis relies on specialized doctors, called pathologists, looking at biological tissue samples under the microscope for signs of abnormality in the cells. The difficulty and subjectivity of pathology diagnoses led us to develop an artificial intelligence (AI) system that can identify the aggressiveness of prostate cancer.
Since many prostate tumors are non-aggressive, doctors first obtain small samples (biopsies) to better understand the tumor for the initial cancer diagnosis. If signs of tumor aggressiveness are found, radiation or invasive surgery to remove the whole prostate may be recommended. Because these treatments can have painful side effects, understanding tumor aggressiveness is important to avoid unnecessary treatment.
Grading the biopsies
One of the most crucial factors in this process is to “grade” any cancer in the sample for how abnormal it looks, through a process called Gleason grading. Gleason grading involves first matching each cancerous region to one of three Gleason patterns, followed by assigning an overall “grade group” based on the relative amounts of each Gleason pattern in the whole sample. Gleason grading is a challenging task that relies on subjective visual inspection and estimation, resulting in pathologists disagreeing on the right grade for a tumor as much as 50 percent of the time. To explore whether AI could assist in this grading, we previously developed an algorithm that Gleason grades large samples (i.e. surgically-removed prostates) with high accuracy, a step that confirms the original diagnosis and informs patient prognosis.
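For reference, the deterministic part of the scheme described above, combining the two most prevalent patterns into an overall grade group, follows a standard clinical table, sketched below. The AI system learns the hard part, the region-level pattern matching, rather than applying a lookup like this.

```python
def gleason_grade_group(primary, secondary):
    """Map the two most prevalent Gleason patterns (3-5) in a sample to
    the grade group (1-5). Illustrative only: real grading also weighs
    the full distribution of patterns across the tissue."""
    if not (3 <= primary <= 5 and 3 <= secondary <= 5):
        raise ValueError("Gleason patterns must be 3, 4, or 5")
    score = primary + secondary
    if score == 6:
        return 1               # 3 + 3
    if score == 7:
        return 2 if primary == 3 else 3  # 3 + 4 vs. 4 + 3
    if score == 8:
        return 4               # 4 + 4, 3 + 5, or 5 + 3
    return 5                   # scores 9-10
```

Note that the same total score can yield different grade groups (3 + 4 is grade group 2, but 4 + 3 is grade group 3), which is one reason the relative amounts of each pattern matter so much.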
In our recent work, “Development and Validation of a Deep Learning Algorithm for Gleason Grading of Prostate Cancer from Biopsy Specimens”, published in JAMA Oncology, we explored whether an AI system could accurately Gleason grade smaller prostate samples (biopsies). Biopsies are performed early in prostate cancer care to establish the initial diagnosis and determine patient treatment, and so are more common than surgeries. However, biopsies can be more difficult to grade than surgical samples due to the smaller amount of tissue and unintended changes introduced by the tissue extraction and preparation process. The AI system we developed first “grades” each region of the biopsy, and then summarizes the region-level classifications into an overall biopsy-level score.
Given the complexity of Gleason grading, we worked with six experienced expert pathologists to evaluate the AI system. These experts, who have specialized training in prostate cancer and an average of 25 years of experience, determined the Gleason grades of 498 tumor samples. Highlighting how difficult Gleason grading is, a cohort of 19 “general” pathologists (without specialist training in prostate cancer) achieved an average accuracy of 58 percent on these samples. By contrast, our AI system’s accuracy was substantially higher at 72 percent. Finally, some prostate cancers have ambiguous appearances, resulting in disagreements even amongst experts. Taking this uncertainty into account, the deep learning system’s agreement rate with experts was comparable to the agreement rate between the experts themselves.
These promising results indicate that the deep learning system has the potential to support expert-level diagnoses and expand access to high-quality cancer care. To evaluate if it could improve the accuracy and consistency of prostate cancer diagnoses, this technology needs to be validated as an assistive tool in further clinical studies and on larger and more diverse patient groups. However, we believe that AI-based tools could help pathologists in their work, particularly in situations where specialist expertise is limited.
Our research advancements in both prostate and breast cancer were the result of collaborations with the Naval Medical Center San Diego and support from Verily. Our appreciation also goes to several institutions that provided access to de-identified data, and many pathologists who provided advice or reviewed prostate cancer samples. We look forward to future research and investigation into how our technology can be best validated, designed and used to improve patient care and cancer outcomes.
Age-related macular degeneration (AMD) is the biggest cause of sight loss in the UK and USA and is the third largest cause of blindness across the globe. The latest research collaboration between Google Health, DeepMind and Moorfields Eye Hospital is published in Nature Medicine today. It shows that artificial intelligence (AI) has the potential to not only spot the presence of AMD in scans, but also predict the disease’s progression.
Vision loss and wet AMD
Around 75 percent of patients with AMD have an early form called “dry” AMD that usually has relatively mild impact on vision. A minority of patients, however, develop the more sight-threatening form of AMD called exudative, or “wet” AMD. This condition affects around 15 percent of patients, and occurs when abnormal blood vessels develop underneath the retina. These vessels can leak fluid, which can cause permanent loss of central vision if not treated early enough.
Wet AMD often affects one eye first, so patients become heavily reliant upon their unaffected eye to maintain their normal day-to-day living. Unfortunately, 20 percent of these patients will go on to develop wet AMD in their other eye within two years. The condition often develops suddenly but further vision loss can be slowed with treatments if wet AMD is recognized early enough. Ophthalmologists regularly monitor their patients for signs of wet AMD using 3D optical coherence tomography (OCT) images of the retina.
The period before wet AMD develops is a critical window for preventive treatment, which is why we set out to build a system that could predict whether a patient with wet AMD in one eye will go on to develop the condition in their second eye. This is a novel clinical challenge, since it’s not a task that is routinely performed.
How AI could predict the development of wet AMD
In collaboration with colleagues at DeepMind and Moorfields Eye Hospital NHS Foundation Trust, we’ve developed an artificial intelligence (AI) model that has the potential to predict whether a patient will develop wet AMD within six months. In the future, this system could potentially help doctors plan studies of earlier intervention, as well as contribute more broadly to clinical understanding of the disease and disease progression.
We trained and tested our model using a retrospective, anonymized dataset of 2,795 patients. These patients had been diagnosed with wet AMD in one of their eyes, and were attending one of seven clinical sites for regular OCT imaging and treatment. For each patient, our researchers worked with retinal experts to review all prior scans for each eye and determine the scan in which wet AMD was first evident. In collaboration with our colleagues at DeepMind, we developed an AI system composed of two deep convolutional neural networks, one taking the raw 3D scan as input and the other, built on our previous work, taking a segmentation map outlining the types of tissue present in the retina. Our prediction system used the raw scan and tissue segmentations to estimate a patient’s risk of progressing to wet AMD within the next six months.
To test the system, we presented the model with a single, de-identified scan and asked it to predict whether there were any signs that indicated the patient would develop wet AMD in the following six months. We also asked six clinical experts—three retinal specialists and three optometrists, each with at least ten years’ experience—to do the same. Predicting the possibility of a patient developing wet AMD is not a task that is usually performed in clinical practice so this is the first time, to our knowledge, that experts have been assessed on this ability.
While clinical experts performed better than chance alone, there was substantial variability between their assessments. Our system performed as well as, and in certain cases better than, these clinicians in predicting wet AMD progression. This highlights its potential use for informing studies in the future to assess or help develop treatments to prevent wet AMD progression.
Future work could address several limitations of our research. The sample was representative of practice at multiple sites of the world’s largest eye hospital, but more work is needed to understand the model’s performance in different demographics and clinical settings. Such work should also assess the impact of unstudied factors—such as additional imaging tests—that might be important for prediction, but were beyond the scope of this work.
These findings demonstrate the potential for AI to help improve understanding of disease progression and predict the future risk of patients developing sight-threatening conditions. This, in turn, could help doctors study preventive treatments.
This is the latest stage in our partnership with Moorfields Eye Hospital NHS Foundation Trust, a long-standing relationship that transitioned from DeepMind to Google Health in September 2019. Our previous collaborations include using AI to quickly detect eye conditions, and showing how Google Cloud AutoML might eventually help clinicians without prior technical experience to accurately detect common diseases from medical images.
This is early research, rather than a product that could be implemented in routine clinical practice. Any future product would need to go through rigorous prospective clinical trials and regulatory approvals before it could be used as a tool for doctors. This work joins a growing body of research in the area of developing predictive models that could inform clinical research and trials. In line with this, Moorfields will be making the dataset available through the Ryan Initiative for Macular Research. We hope that models like ours will be able to support this area of work to improve patient outcomes.
Dr. Karen DeSalvo knows how to deal with a crisis. She was New Orleans Health Commissioner following Hurricane Katrina and a senior official at the Department of Health and Human Services when Ebola broke out. And now, as Google’s Chief Health Officer, she’s become the company’s go-to medical expert, advising our leaders on how to react to the coronavirus. Dr. DeSalvo has been a voice of reassurance for Googlers, but her expertise is helpful outside of Google, too. I recently spoke to Dr. DeSalvo about how we’ll get through the crisis, what Google is doing to help and what makes her optimistic despite the challenges we face.
How is the coronavirus different from other public health crises you’ve dealt with?
In my work in New Orleans, whether it was a hurricane, a fire or a power outage, we drew resources from other parts of the country if we needed help. In this case, the entire world has been impacted. Everyone is living with uncertainty, disrupted supply chains, impacts on travel and social infrastructure. While this creates a sense of community that I hope will continue beyond the pandemic, the downside is that we have less opportunity to send assistance to other places. Where there is opportunity, we’ve seen people paying it forward, like when California deployed ventilators to the East Coast. The sense of community that grows out of any disaster is the bright spot, for me.
How are industries sharing ideas and research in this global crisis?
Physicians are using technology to talk to each other constantly about what they’re seeing and doing, and in prior outbreaks this real-time communication wasn’t possible. It makes a huge difference in clinical care. In the medical community, you sometimes have to pay for a journal article. But now if you want to read about COVID-19, it’s free for any researcher, scientist, clinician or layperson. That’s putting information first, putting knowledge and science above proprietary interest.
It’s happening in science, too. For instance, there’s a collaboration between competitors in the private sector on designing trials and assessing the outcomes of drugs and vaccines. At Google, our DeepMind colleagues were able to use deep learning to predict protein folding, helping advance the thinking about therapeutics and vaccines. I don’t think we’ve seen this spirit of collaboration in the history of science, and it’s one of the reasons I’m so optimistic.
What is Google doing to help curb misinformation?
In this historic moment, access to the right information at the right time will save lives. Period. This is why our Search teams design our ranking systems to promote the most relevant and reliable information available. We build these protections in advance so they’re ready when a crisis hits, and this approach serves as a strong defense against misinformation.
When COVID-19 began to escalate, we built features on top of those fundamental protections to help people find information from local health authorities. We initially launched an SOS alert with the World Health Organization to make resources about COVID-19 easily discoverable. This has evolved into an expanded Search experience, providing easy access to more authoritative information, alongside new data and visualizations.
We’re surfacing content that’s accessible to a whole range of communities, and there’s constant vigilance to remove misinformation on platforms like YouTube—this includes videos or other information that could be harmful to people.
What does it mean to be Google’s Chief Health Officer?
My role is to bring a holistic view of emotional, physical and social health and well-being to Google’s products and services, particularly under Google Health. During this pandemic, my team has also thought about how Google can assist public health efforts. This has meant anything from the Community Mobility Reports, a tool to help measure the impact of social distancing, to building playlists in partnership with YouTube geared towards clinicians, and showing testing sites for COVID-19 all over the world.
In the general public, what behaviors or mentalities have arisen that should continue in the future?
First, there are fundamental ways to reduce the transmission of communicable diseases like the flu or, in some communities, measles or tuberculosis. If you’re able to, it’s important to stay home if you’re sick, wash your hands, cough into your elbow—I call these the “Grandma rules.” Second, there are a lot of components to health: social health, emotional well-being, financial stability. Health is driven by more than just medical care, and this is a moment for us to remember that a holistic approach matters.
What should business owners consider as restrictions begin to lift?
They need to prepare for a world in which employees can work remotely as much as possible. Policies will still recommend social distancing, but we also need to create an environment where people who are sick feel comfortable staying home. That’s not realistic for every small business, so paying attention to the basic hygiene stuff—Do the Five—is also important.
After Katrina, there was this time when the world was paying attention and trying to help, but the emotional and social impact on our community lasted for months. There will be some of that after this pandemic, because you can’t just flip a switch and have people go back to work. That’s the important thing—being patient as people put themselves back into a normal routine.
Taking off your Chief Health Officer hat, how do you reassure friends and family when they’re worried about this situation?
Medically, we need to be patient and let the scientists do their thing. It’s probably going to take until summer or early fall in the northern hemisphere to get clarity on what therapeutics work. The end game is to develop a vaccine so we can make sure everybody is protected. This is going to be a long journey with many months ahead, so we need to pace ourselves.
Statistically, more people will have anxiety and depression from COVID-19 than will actually get COVID-19. To share tips on mental well-being, we recently launched the “Be Kind To Your Mind” PSA on Google Search.
Lastly, I remind those who are privileged to have a safe space to stay home when other people can’t. I think about my previous work with low income patients, and how this crisis impacts them as well as communities of color, non-native English speakers, and individuals with disabilities. Staying home is not safe, comfortable and financially feasible for everybody. We should all be doing what we can for our neighbors and our friends and the people who aren’t always seen.
The coronavirus pandemic has disrupted lives around the world. In addition to the lives lost to the virus, as many communities enter the second and third month under stay-at-home orders, there is a rising mental health toll, too. In a national survey released by the American Psychiatric Association in March, 36 percent of respondents said that COVID-19 was seriously impacting their mental health; 48 percent were anxious about getting infected; and 57 percent reported concern that COVID-19 will seriously impact their finances.
As a trained psychiatrist, I know firsthand the importance of bringing the issue of mental health out into the open. While it might be years between the first onset of symptoms and someone seeking help, the internet is often the first place people turn to find out more about mental disorders. To help address the emerging mental health crisis, we’re sharing “Be Kind to Your Mind,” which includes resources on mental wellbeing from the Centers for Disease Control and Prevention (CDC). Whenever people in the U.S. search for information about coping with the pandemic, or on COVID-19 and mental health, we’ll show a public service announcement with tips to cope with stress during COVID-19. To raise awareness of the importance of mental wellbeing during these times, we’ll highlight these resources on Google’s homepage tomorrow.
With May being Mental Health Awareness Month, we want to highlight a few other resources and tools across Google and YouTube that promote mental wellbeing.
Self-assessment questionnaires for depression and PTSD
When people search on Google for information about mental health conditions, we provide panels with information from authoritative sources like Mayo Clinic that detail symptoms and treatments and give an overview of the different types of specialists who can help. On the info panels for depression and post-traumatic stress disorder (PTSD), we provide direct access to clinically validated self-assessment questionnaires that ask some of the same types of questions a mental health professional might ask. Based on a person’s answers, these self-assessment tools provide information on risk, along with links to more resources. Results of these questionnaires are not logged. We hope they can provide insight and help people have a more informed conversation with their doctor. We will add more self-assessment questionnaires over time to cover more conditions.
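The post doesn’t name the specific instruments, so as a hypothetical example, here is how a PHQ-9-style depression screen—one widely used, clinically validated questionnaire—is scored: nine items, each answered 0–3, summed into a total that maps to a severity band.

```python
# Standard PHQ-9 severity bands (total score ranges from 0 to 27).
PHQ9_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(answers):
    """Sum nine 0-3 item responses and map the total to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("expected nine answers, each scored 0-3")
    total = sum(answers)
    band = next(label for lo, hi, label in PHQ9_BANDS if lo <= total <= hi)
    return total, band
```

A total of 10 or above is the threshold commonly used to flag possible major depression and prompt a follow-up conversation with a professional—the kind of “information on risk” a self-assessment tool can surface.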
Self-care content on YouTube
Over the last few months, YouTube has seen a 35 percent increase in views of meditation videos, and growing popularity of mindfulness and wellbeing content. YouTube is making videos like these and other mental health resources more widely available to anyone around the world, for free, by spotlighting channels and playlists that have wellbeing and mindfulness-focused content. Countless YouTube creators, like Dr. Mike and Kati Morton, educate their communities as they help reduce the stigma associated with mental health. YouTube is also launching relevant YouTube Originals, including a “BookTube” episode featuring top authors like Melinda Gates and Elizabeth Gilbert offering their best book recommendations.
Finding virtual care options, quickly
Because of stay-at-home orders and restrictions that limit in-person interactions, many mental health care providers (including therapists and psychiatrists) are now providing telehealth care, like conducting therapy sessions over video conference. To make these options easier to find, we now allow providers to highlight their virtual care services on their Google Business Profile. So now, when you search for a mental health provider in products like Search and Maps, you may see an “Online care” link that can take you to their virtual care page, or even schedule a virtual appointment.
While the stigma around mental health has lessened in recent years, many people still find it hard to reach out to get help. By providing access to mental health resources, services and information across our products, we hope to make it easier for people to seek help and receive proper care.
Over the past four years, Google has advanced its AI technologies to address critical problems in healthcare. We’ve developed tools to detect eye disease, AI systems to identify cardiovascular risk factors and signs of anemia, and to improve breast cancer screening.
For these and other AI healthcare applications, the journey from initial research to useful product can take years. One part of that journey is conducting user-centered research. Applied to healthcare, this type of research means studying how care is delivered and how it benefits patients, so we can better understand how algorithms could help, or even inadvertently hinder, assessment and diagnosis.
Our research in practice
For our latest research paper, “A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy,” we built on a partnership with the Ministry of Public Health in Thailand to conduct field research in clinics across the provinces of Pathum Thani and Chiang Mai. It’s one of the first published studies examining how a deep learning system is used in patient care, and it’s the first study of its kind that looks at how nurses use an AI system to screen patients for diabetic retinopathy.
Over a period of eight months, we made regular visits to 11 clinics. At each clinic, we observed how diabetes nurses handle eye screenings, and we interviewed them to understand how to refine this technology. We did our field research alongside a study to evaluate the feasibility and performance of the deep learning system in the clinic, with patients who agreed to be carefully observed and medically supervised during the study.
The observational process
In our research, we provide key recommendations for continued product development, and provide guidance on deploying AI in real-world scenarios for other research projects.
Developing new products with a user-centered design process requires involving the people who would interact with the technology early in development. This means getting a deep understanding of people’s needs, expectations, values and preferences, and testing ideas and prototypes with them throughout the entire process. When it comes to AI systems in healthcare, we pay special attention to the healthcare environment, current workflows, system transparency, and trust.
The impact of environment on AI
In addition to these factors, our fieldwork found that we must also factor in environmental differences like lighting, which varies among clinics and can affect image quality. Just as an experienced clinician knows how to account for these variables when assessing an image, AI systems also need to be trained to handle these situations.
For instance, some images captured in screening might have issues like blurs or dark areas. An AI system might conservatively call some of these images “ungradable” because the issues might obscure critical anatomical features that are required to provide a definitive result. For clinicians, the gradability of an image may vary depending on one’s own clinical set-up or experience. Building an AI tool to accommodate this spectrum is a challenge, as any disagreements between the system and the clinician can lead to frustration. In response to our observations, we amended the research protocol to have eye specialists review such ungradable images alongside the patient’s medical records, instead of automatically referring patients with ungradable images to an ophthalmologist. This helped to ensure a referral was necessary, and reduced unnecessary travel, missed work, and anxiety about receiving a possible false positive result.
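The amended triage logic can be sketched as a simple decision rule. The function and labels below are illustrative, not the study’s actual protocol code: gradable images get an immediate screening result, while ungradable ones are routed to specialist review alongside the patient’s records instead of an automatic referral.

```python
def triage(gradable: bool, model_positive: bool) -> str:
    """Route one screening image per the amended protocol described above."""
    if not gradable:
        # An eye specialist reviews the image with the patient's medical
        # records, rather than automatically referring the patient onward.
        return "specialist review with medical records"
    return "refer to ophthalmologist" if model_positive else "routine follow-up"
```

The design choice the amendment reflects is that “ungradable” is not treated as “positive”: only images a specialist confirms as warranting care generate a referral, which avoids unnecessary travel, missed work, and false-positive anxiety.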
Finally, alongside evaluating the performance, reliability, and clinical safety of an AI system, the study also accounts for the human impacts of integrating an AI system into patient care. For example, the study found that the AI system could empower nurses to confidently and immediately identify a positive screening, resulting in quicker referrals to an ophthalmologist.
So what does all of this mean?
Deploying an AI system by considering a diverse set of perspectives in the design and development process is just one part of introducing new health technology that requires human interaction. It’s important to also study and incorporate real-life evaluations in the clinic, and engage meaningfully with clinicians and patients, before the technology is widely deployed. That’s how we can best inform improvements to the technology, and how it is integrated into care, to meet the needs of clinicians and patients.