Between 2015 and 2020, more than 1.5 billion people began using the internet for the first time. Another billion are set to join them online by 2025.
Most of these new internet users come from Asia, Latin America and Africa. They experience the internet differently from those who came before them—connecting on their phones and adopting new apps and tools incredibly quickly. More and more, it’s their needs and ideas that are shaping the future of technology, in areas from financial inclusion to language translation.
Today, though, new internet users face their biggest challenge—the impact of COVID-19. How we help them get through it will go a long way towards ensuring the recovery from the pandemic is inclusive and sustainable.
A half-decade of change
Without question, the internet is more accessible and democratic than it was in 2015. Data costs have plummeted, helping the number of smartphone owners reach more than three billion people. The proportion of non-English speakers using the internet has reached three quarters of the global total, and people around the world are increasingly using video and voice as their tools to find information and services online.
For Google, our work building for new users has helped us build better for everyone. Since we launched the Next Billion Users initiative five years ago, it’s led to breakthroughs we wouldn’t otherwise have made—from offline modes in YouTube and Maps, to AI that can help kids read in multiple languages, apps that protect privacy on shared devices, and the new user experience in Google Pay (first launched in India and soon coming to the rest of the world). We’re also sharing open-source tools and guidelines to help others, because we know that supporting new users is a shared goal.
Over the past half-decade, the technology industry has made meaningful progress in closing digital divides, helping millions more people a week share in the benefits technology creates. Yet as the pandemic increases the importance of technology in our lives, work, education and health, the risk is that this progress will slow or, worse, reverse.
The impact of COVID-19
We asked new internet users how the coronavirus has affected them, and many told us it’s added to pressures they already face. At a time when essential services are increasingly moving online, it’s becoming harder and harder for new users to access the internet in the first place.
The combination of fewer jobs, lower income and higher prices means they’re forced to ration their data. Food and shelter have to take priority—and with more people at home, even when data is available, it tends to be spread thinly across multiple family members.
On top of that, a lack of digital literacy means new users often struggle to take advantage of government financial aid, community resources or schooling. And when it comes to the virus itself, many are finding it hard to separate fact from misinformation, or to find reliable healthcare options.
Not surprisingly, all this is taking a toll on new internet users’ sense of emotional wellbeing, interrupting their support systems and forcing them to put some of their aspirations on hold.
How we help new users from here: economy, education, ecosystem
Countering the impact of the virus by helping new users through and beyond COVID should be a priority for industry, governments, international organizations and nonprofits.
First, we have to make sure new users have easy-to-use tools that meet their immediate economic needs.
We recognize Google’s responsibility in this. Apps like Kormo Jobs in Bangladesh, India and Indonesia, which connects people to entry-level jobs, are already playing a role in helping people find work. In the coming months, we’ll be experimenting with a new Google product that can provide additional earning opportunities through crowdsourcing, recognizing that for most new internet users, protecting income is the first priority.
Second, we have to increase our focus on education—helping new users better understand online information and services, and adapt to deeper changes like the rise of online education.
Grassroots, nonprofit-led literacy initiatives like those Google.org is supporting in Southeast Asia are important steps in the right direction. So too are the Google News Initiative’s partnerships throughout Latin America, and Grow with Google’s global programs like Be Internet Awesome, which promotes online safety and confidence for kids. It’s critical that we build on these programs in the aftermath of the pandemic.
Third, we have to keep building a supportive ecosystem around new users. We should aspire for every organization that owns or builds technology to prioritize inclusion.
Too often, the responsibility for helping new users get online falls to ‘informal teachers’, the friends and family around them. Initiatives like the Design Toolkit for Digital Confidence show how we can begin to change that, equipping technology-makers to build tools that are intuitive for everyone, no matter what their circumstances.
Finally, we have to keep advancing the work that led Google to create the NBU initiative in 2015: ensuring the internet, and the devices and tools it supports, are helpful and accessible to more people, in more languages and in more ways (including for those living with disabilities).
COVID-19 is a challenge for everyone, and it’s hitting new internet users especially hard. But if governments, businesses and civil society organizations work together, we can and should make the internet better and more inclusive in the post-COVID world, for the billions online today, and the next billion to come.
Why it’s important
- This change will apply to all compatible Office file types, including .docx, .doc, .ppt, .pptx, .xls, .xlsx, .xlsm
- Password protected Office files will not open directly in Office editing mode. These files will continue to open in Preview mode.
- If the “Office Editing for Docs, Sheets & Slides” Chrome extension is installed, we will redirect to the extension and not to Docs, Sheets, or Slides. This is the same as if you select “Open with” today.
- Admins: This feature will be ON by default. There is no admin control for this feature.
- End users: This change will take place by default when opening compatible Office files in Drive on the web. You can still use preview mode by right-clicking the file and clicking “Preview,” or by pressing ‘P’ on the keyboard while double-clicking the file. Visit the Help Center to learn more about working with Office files in Drive.
- Available to Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education, Enterprise for Education, and Nonprofits customers and users with personal Google Accounts
Docker Hub is a popular registry for hosting public container images. Earlier this summer, Docker announced that it would begin rate-limiting image pulls from the service by “Free Plan” users. Anonymous users are now limited to 100 pulls per six hours; authenticated users are limited to 200 pulls per six hours. When the new rate limits take effect on November 1st, they might disrupt your automated build and deployment processes on Cloud Build, or how you deploy artifacts to Google Kubernetes Engine (GKE), Cloud Run or App Engine Flex from Docker Hub.
This situation is made more challenging because, in many cases, you may not be aware that a Google Cloud service you are using is pulling images from Docker Hub. For example, if your Dockerfile has a statement like `FROM debian:latest`, or your Kubernetes Deployment manifest has a statement like `image: postgres:latest`, it is pulling the image directly from Docker Hub. To help you identify these cases, Google Cloud has prepared a guide with instructions on how to scan your codebase and workloads for container image dependencies from third-party container registries, like Docker Hub.
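Before working through the full guide, a quick first pass over a codebase can be a simple grep. This is only a heuristic: it treats any image reference without a dotted registry hostname before the first slash as a Docker Hub pull, and it will miss explicit `docker.io/...` references.

```shell
# Dockerfile FROM lines with no registry hostname (a dotted name before
# the first "/") resolve against Docker Hub by default.
grep -rHE '^FROM ' --include='Dockerfile*' . | grep -vE 'FROM +[^ /]+\.[^ /]+/' || true

# Same heuristic for image: lines in Kubernetes manifests.
grep -rHE '^[[:space:]]*image:' --include='*.yaml' . | grep -vE 'image:[[:space:]]*[^ /]+\.[^ /]+/' || true
```

Anything this prints is a candidate to pin to a registry you control.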
We are committed to helping you run highly reliable workloads and automation processes. In the rest of the blog post, we’ll discuss how these new Docker Hub pull rate limits may affect your deployments running on various Google Cloud services, and strategies for mitigating any potential impact. Be sure to check back often, as we will update this post regularly.
Impact on Kubernetes and GKE
One of the groups that may see the most impact from these Docker Hub changes is users of managed container services. As it does for other managed Kubernetes platforms, Docker Hub treats GKE as an anonymous user by default. This means that unless you specify Docker Hub credentials in your configuration, your cluster is subject to the new throttling of 100 image pulls per six hours, per IP address. And many Kubernetes deployments on GKE use public images: any container name that doesn’t have a container registry prefix such as `gcr.io` is pulled from Docker Hub.
Container Registry hosts a cache of the most-requested Docker Hub images from Google Cloud, and GKE is configured to use this cache by default. This means that the majority of image pulls by GKE workloads should not be affected by Docker Hub’s new rate limits. Furthermore, to remove any chance that an image you need is missing from the cache in the future, we recommend migrating your dependencies into Container Registry, so that you pull all your images from a registry under your control.
In the interim, to verify whether or not you are affected, you can generate a list of Docker Hub images your cluster consumes:
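One way to do this (a sketch; adjust the jsonpath if you also use init containers) is to list every container image referenced in the cluster and filter out Container Registry references:

```shell
# Requires kubectl configured against your cluster. Images without a
# gcr.io prefix are candidates for Docker Hub pulls.
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[:space:]' '\n' | sort | uniq -c | grep -v 'gcr.io' || true
```

No output means no Docker Hub images were found.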
You may want to know if the images you use are in the cache. The cache changes frequently, but you can check its current contents with a simple command:
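For example, for official Docker Hub “library” images, the mirrored copies live under `mirror.gcr.io/library`, and (assuming you have an authenticated gcloud installation) you can list them with:

```
gcloud container images list --repository=mirror.gcr.io/library
```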
It is impractical to predict cache hit-rates, especially in times where usage will likely change dramatically. However, we are increasing cache retention times to ensure that most images that are in the cache stay in the cache.
GKE nodes also have their own local disk cache, so when reviewing your usage of Docker Hub, you only need to count the number of unique image pulls (of images not in our cache) made from GKE nodes:
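A rough count can be produced with a variation of the earlier listing command (again a sketch, assuming images with a `gcr.io` prefix don’t count against the limit):

```shell
# Number of unique non-Container-Registry images referenced by the cluster;
# compare this against Docker Hub's 100-pulls-per-six-hours anonymous limit.
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[:space:]' '\n' | grep -v 'gcr.io' | sort -u | wc -l
```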
For private clusters, consider the total number of such image pulls across your cluster (as all image pulls will be routed via a single NAT gateway).
For public clusters you have a bit of extra breathing room, as you only need to consider the number of unique image pulls on a per-node basis. For public nodes, you would need to churn through more than 100 unique public uncached images every 6 hours to be impacted, which is fairly uncommon.
If you determine that your cluster may be impacted, you can authenticate to Docker Hub by adding `imagePullSecrets` with your Docker Hub credentials to every Pod that references a container image on Docker Hub.
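Concretely (the secret name and credentials below are placeholders), that means creating a registry secret once per namespace and referencing it from your Pod specs:

```shell
# Store Docker Hub credentials in a Kubernetes secret.
# "|| true" keeps this idempotent if the secret already exists.
kubectl create secret docker-registry dockerhub-credentials \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=YOUR_DOCKERHUB_USERNAME \
  --docker-password=YOUR_DOCKERHUB_TOKEN || true

# Each Pod spec (or pod template) then references the secret:
#
#   spec:
#     imagePullSecrets:
#     - name: dockerhub-credentials
#     containers:
#     - name: app
#       image: nginx:1.19
```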
While GKE is one of the Google Cloud services that may see an impact from the Docker Hub rate limits, any service that relies on container images may be affected, including Cloud Build, Cloud Run, and App Engine.
Finding the right path forward
Upgrade to a paid Docker Hub account
Arguably, the simplest, but most expensive, solution to Docker Hub’s new rate limits is to upgrade to a paid Docker Hub account. If you choose to do that and you use Cloud Build, Cloud Run on Anthos, or GKE, you can configure each of these runtimes to pull images with your credentials.
Switch to Container Registry
Another way to avoid this issue is to move any container artifacts you use from Docker Hub to Container Registry. Container Registry stores images as Google Cloud Storage objects, allowing you to incorporate container image management as part of your overall Google Cloud environment. More to the point, opting for a private image repository for your organization puts you in control of your software delivery destiny.
To help you migrate, the above-mentioned guide also provides instructions on how to copy your container image dependencies from Docker Hub and other third-party container image registries to Container Registry. Please note that these instructions are not exhaustive—you will have to adjust them based on the structure of your codebase.
Additionally, you can use Managed Base Images, which are automatically patched by Google for security vulnerabilities, using the most recent patches available from the project upstream (for example, GitHub). These images are available in the GCP Marketplace.
Here to help you weather the change
The new rate limits on Docker Hub pull requests will have a swift and significant impact on how organizations build and deploy container-based applications. In partnership with the Open Container Initiative (OCI), a community devoted to open industry standards around container formats and runtimes, we are committed to ensuring that you weather this change as painlessly as possible.
2020 has brought with it some tremendous innovations in the area of cloud security. As cloud deployments and technologies have become an even more central part of organizations’ security program, we hope you’ll join us for the latest installment of our Google Cloud Security Talks, a live online event on November 18th, where we’ll help you navigate the latest thinking in cloud security.
We’ll share expert insights into our security ecosystem and cover the following topics:
Sunil Potti and Rob Sadowski will open the digital event with our latest Google Cloud security announcements.
This will be followed by a panel discussion with Dave Hannigan and Jeanette Manfra from Google Cloud’s Office of the CISO on how cloud migration is a unique opportunity to dismantle the legacy security debt of the past two decades.
Kelly Waldher and Karthik Lakshminarayan will talk about the new Google Workspace and how it can enable users to access data safely and securely while preserving individual trust and privacy.
We will present our vision of network security in the cloud with Shailesh Shukla and Peter Blum, where we’ll talk about the recent innovations that are making network security in the cloud powerful but invisible, protecting infrastructure and users from cyber attacks.
Sam Lugani and Ibrahim Damlaj will do a deeper dive on Confidential Computing, and more specifically Confidential GKE Nodes and how they can add another layer of protection for containerized workloads.
You will also learn how Security Command Center can help you identify misconfigurations in your virtual machines, containers, network, storage, and identity and access management policies, as well as vulnerabilities in your web applications, with Kathryn Shih and Timothy Peacock.
Anton Chuvakin and Seth Vargo will talk about the differences between key management and secret management to help you choose the best security controls for your use cases.
Finally, we will host the Google Cloud Security Showcase, a special segment where we’ll focus on a few security problems and show how we’ve recently helped customers solve them using the tools and products that Google Cloud provides.
We look forward to sharing our latest security insights and solutions with you. Sign up now to reserve your virtual seat.
Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs
Stevens Institute of Technology’s Google Developer Student Club. Names left to right: Tim Leonard, Will Escamilla, Rich Bilotti, Justin O’Boyle, Luke Mizus, and Rachael Kondra
The Google Developer Student Club at the Stevens Institute of Technology built their own website that makes local government data user friendly for voters in local districts. The goal: Take obscure budget and transportation information, display it via an easy-to-understand UI, and help voters become more easily informed.
When Tim Leonard first moved to Hoboken, New Jersey to start school at the Stevens Institute of Technology, he was interested in anything but government. A computer science major with a deep interest in startups, one was more likely to find him at a lecture on computational structures than on political science.
However, as the founder of the Google Developer Student Club (DSC) chapter at his university, Tim and his fellow classmates had the opportunity to make the trip into New York City to attend a developer community meetup with Ralph Yozzo, a community organizer from Google Developer Groups (GDG) NYC. While Ralph had given several talks on different technologies and programming techniques, this time he decided to try something new: Government budgets.
A slide from Ralph’s presentation
Titled “Why we should care about budgets,” Ralph’s talk to the young programmers focused on why tracking government spending in their community matters. He further explained how public budgets fund many parts of our lives – from getting to work, to taking care of our health, to going to a good school. However, Ralph informed them that while there are currently laws that attempt to make this data public, a platform that makes this information truly accessible didn’t exist. Instead, most of this information is tucked away in different corners of the internet; unorganized, and hard to understand.
Tim soon realized programming could be the solution and that his team had the chance to grow in a whole new way, outside of the traditional classroom setting. With Ralph’s encouragement, Tim and his team started thinking about how they could build a platform to collect all of this data, and provide a UI that’s easy for any user to interact with. By creating a well-organized website that could pull all of this local information, streamline it, and produce easy-to-understand graphics, the DSC Stevens team imagined they could have an impact on how voters inform themselves before casting their ballots at local elections.
“What if we had a technical approach to local government? Where our site would have actionable metrics that held us accountable for getting information out to the public.”
Tim thought if local voters could easily understand how their representatives were spending their community’s money, they could use it as a new framework to decide how to vote. The next step was to figure out the best way to get started.
An image from the demo site
The DSC Stevens team quickly agreed that their goal should be to build a website about their own city, Hoboken. They named it “Project Crystal” and started taking Google App Engine courses and conducting Node.js server run-throughs. With the data they would eventually store and organize in mind, they also dove into Google Cloud demos and workshops on Google Charts. They were determined to build something that would store public information in a different way.
“Bounce rates and click through metrics ensure we evaluate our site like a startup. Instead of selling a product, our platform would focus on getting people to interact with the data that shapes their everyday lives.”
After participating in different courses on how to use Google Cloud, Maps, and Charts, they finally put it all together and created the first version of their idea – an MVP site, built to drive user engagement, that would serve as their prototype.
A video explaining the Project Crystal website
Complete with easy-to-understand budget charts, contact information for different public officials, and maps to help users locate important services, the prototype site has been their first major step in turning complicated data into actionable voting information. Excited about their progress, Tim wants to eventually host the site on Google Cloud so his team can store more data and offer the platform to local governments across the country.
Image of the DSC Stevens team adding Google Charts to their demo site
The DSC Stevens team agrees that access to resources like Project Crystal could change how we vote. They hope that with the right technical solutions around data, voters will be better informed, eager to ask more of their representatives, and more willing to participate in the day-to-day work of building their communities, together.
“Our advice to other student developers is to find outlets, like DSC, that enable you to think about helping others. For us, it was figuring out how to use our Google Cloud credits for good.”
Why you’d use it
- An accelerated Support experience, available 24/7. For Priority 1 cases, customers can expect a first meaningful response within one hour; for Priority 2 cases, they can expect a response in four hours.
- Intelligent triaging. This ensures that cases are routed directly to technical experts who have advanced product knowledge and training, as well as additional tools to provide complete support.
- Third-Party Technology Support: Enhanced Support can help customers leverage the many third-party integrations available on Google Workspace. This includes assistance with application setup, configuration, and troubleshooting.
- Standard Support: Included with Business Starter, Business Standard, and Business Plus editions, Standard Support offers 24/7 technical support, with a four-hour response time for the highest priority cases.
- Enhanced Support: Our new offering, Enhanced Support comes with faster support (a one-hour response time), intelligent triaging, and Third-Party Technology Support.
- Premium Support: Launched earlier this year, Premium Support offers the fastest response time, 15 minutes for P1 cases, a named Technical Account Manager, and additional support functions.
- Admins: Enhanced Support is included with Enterprise Essentials, Enterprise Standard, and Enterprise Plus Editions. It is available for purchase by Business Standard and Business Plus customers.
- Included with Enterprise Essentials, Enterprise Standard, and Enterprise Plus editions
- Available as an upgrade for Business Standard and Business Plus editions.
In recent years, stateless middle-tiers have been touted as a simple way to achieve horizontal scalability. But the rise of microservices has pushed the limits of the stateless architectural pattern, causing developers to look for alternatives.
Stateless middle-tiers have been a preferred architectural pattern because they helped with horizontal scaling by alleviating the need for server affinity (aka sticky sessions). Server affinity made it easy to hold data in the middle-tier for low-latency access and easy cache invalidation. The stateless model pushed all “state” out of the middle-tier into backing data stores. In reality, the stateless pattern just moved complexity and bottlenecks to the backing data tier. The growth of microservice architectures exacerbated the problem by putting more pressure on the middle tier since, technically, microservices should only talk to other services and not share data tiers. All manner of baling wire and duct tape has been employed to overcome the challenges introduced by these patterns. New patterns are now emerging which fundamentally change how we compose a system from many services running on many machines.
To take an example, imagine you have a fraud detection system. Traditionally, the transactions would be stored in a gigantic database, and the only way to perform analysis on the data would be to periodically query the database, pull the necessary records into an application, and perform the analysis. But these systems do not partition or scale easily, and they lack the ability to do real-time analysis. So architectures shifted to a more event-driven approach, where transactions were put onto a bus from which a scalable fleet of event-consuming nodes could pull them. This approach makes partitioning easier, but it still relies on gigantic databases that receive a lot of queries. Thus, event-driven architectures often ran into challenges with multiple systems consuming the same events at different rates.
Another (we think better) approach is to build an event-driven system that co-locates partitioned data in the application tier, while backing the event log in a durable external store. In our fraud detection example, this means a consumer can receive transactions for a given customer, keep those transactions in memory for as long as needed, and perform real-time analysis without issuing an external query. Each consumer instance receives a subset of commands (i.e., “add a transaction”) and maintains its own “query” / projection of the accumulated state. For instance, the instance that owns a given customer can score a new transaction against that customer’s full history directly from memory.
By separating commands and queries we can easily achieve end-to-end horizontal scaling, fault tolerance, and microservice decoupling. And with the data being partitioned in the application tier we can easily scale that tier up and down based on the number of events or size of data, achieving serverless operations.
Making it work with Cloudstate
This architecture is not entirely uncommon; it goes by names such as Event Sourcing, Command Query Responsibility Segregation (CQRS), and Conflict-free Replicated Data Types (CRDTs). (Note: for a great overview, see the presentation titled “Cloudstate – Towards Stateful Serverless” by Jonas Bonér.) But until now, it’s been pretty cumbersome to build systems with these architectures due to primitive programming and operational models. The new Cloudstate open-source project attempts to change that by building more approachable programming and operational models.
Cloudstate’s programming model is built on top of protocol buffers (protobufs) which enable evolvable data schemas and generated service interaction stubs. When it comes to data schemas, protobufs allow you to add fields to event / message objects without breaking systems that are still using older versions of those objects. Likewise, with the gRPC project, protobufs can be automatically wrapped with client and server “stubs” so that no code needs to be written for handling protobuf-based network communication. For example, in the fraud detection system, the protobuf might be:
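The original schema listing isn’t reproduced in this copy of the post. Based on the fields the text goes on to reference, it would look something like the following (field names other than `user_id` are illustrative); Cloudstate marks the sharding key with its `entity_key` field option:

```protobuf
syntax = "proto3";

package fraud;

import "cloudstate/entity_key.proto";

message Transaction {
  // Cloudstate shards entities by this key, one entity per user.
  string user_id = 1 [(.cloudstate.entity_key) = true];
  int64 amount_cents = 2;
  string merchant = 3;
  int64 timestamp = 4;
}
```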
The `Transaction` message contains the details about a transaction and the `user_id` field enables automatic sharding of data based on the user.
Cloudstate adds support for event sourcing on top of this foundation so developers can focus on just the commands and accumulated state that a given component needs. For our fraud detection example, we can simply define a class / entity to hold the distributed state and handle each new transaction. You can use any supported language; here we use Kotlin, a concise JVM language that is fully interoperable with Java.
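The code listing itself isn’t preserved in this copy of the post, but the shape of such an entity can be sketched in plain Kotlin (no Cloudstate SDK shown; in the real service, Cloudstate’s annotations bind handlers like these to the runtime, and all names here are illustrative):

```kotlin
// Sketch of an event-sourced entity: commands emit events, events fold
// into in-memory state, and queries are served from that state.
data class Transaction(val userId: String, val amountCents: Long)

sealed interface Event
data class TransactionAdded(val tx: Transaction) : Event

class UserTransactionsEntity {
    // Accumulated state; rebuilt by replaying the event log on recovery.
    private val transactions = mutableListOf<Transaction>()

    // Command handler: validate and emit events (the runtime persists them).
    fun addTransaction(tx: Transaction): List<Event> = listOf(TransactionAdded(tx))

    // Event handler: fold one persisted event into state.
    fun apply(event: Event) {
        when (event) {
            is TransactionAdded -> transactions.add(event.tx)
        }
    }

    // Real-time query, answered from memory with no external database call.
    fun totalSpendCents(): Long = transactions.sumOf { it.amountCents }
}
```

A runtime such as Cloudstate persists the events returned by command handlers and replays them through `apply` after a restart, which is what lets the in-memory state move with the entity.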
With the exception of a tiny bit of bootstrapping code, that’s all you need to build an event-sourced system with Cloudstate!
The operational model is also just as delightful since it is built on Kubernetes and Knative. First you need to containerize the service. For JVM-based builds (Maven, Gradle, etc.) you can do this with Jib. In our example we use Gradle and can just run:
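With the Jib Gradle plugin applied to the build, the whole build-and-push step is a single task (the image name below is a placeholder for your own project):

```
./gradlew jib --image=gcr.io/YOUR_PROJECT_ID/fraud-service
```

Jib builds and pushes the image directly, with no local Docker daemon required.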
This creates a container image for the service and stores it on the Google Container Registry. To run the Cloudstate service on your own Kubernetes / Google Kubernetes Engine (GKE) cluster, you can use the Cloudstate operator together with a deployment descriptor for your service.
There you have it—a scalable, distributed event-sourced service!
And if you’d rather not manage your own Kubernetes cluster, then you can also run your Cloudstate service in the Akka Serverless managed environment, provided by Lightbend, the company behind Cloudstate.
To deploy the Cloudstate service to Akka Serverless, simply run:
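Using Lightbend’s CLI, the invocation looks roughly like this (the service name and image are placeholders, and the CLI evolves, so check Lightbend’s current docs for the exact syntax):

```
akkasls services deploy fraud-service gcr.io/YOUR_PROJECT_ID/fraud-service
```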
It’s that easy! Here is a video that walks through the full fraud detection sample:
You can find the source for the sample on GitHub: github.com/jamesward/cloudstate-sample-fraud
Akka Serverless under the hood
As an added bonus, Akka Serverless itself is built on Google Cloud. To deliver this stateful serverless cloud service on Google Cloud, Cloudstate needs a distributed durable store for messages. With the open-source Cloudstate you can use PostgreSQL or Apache Cassandra. The managed Akka Serverless service is built on Google Cloud Spanner due to its global scale and high throughput. Lightbend also chose to build their workload execution on GKE to take advantage of its autoscaling and security features.
Together, Lightbend and Google Cloud have many shared customers who have built modern, resilient, and scalable systems with Lightbend’s open source and Google’s Cloud services. So we are excited that Cloudstate brings together Lightbend and Google Cloud and we look forward to seeing what you will build with it! To get started check out the Open Source Cloudstate project and Lightbend’s Akka Serverless managed cloud service.
Roughly one billion people, or 15% of the world’s population, have some form of disability.1 Here at Google, one of our core values is providing help to our users, and with that, making our products as accessible as possible. In fact, my job is dedicated to building technology that makes Chrome Browser and Chrome OS more accessible. In recognition of National Disability Employment Awareness Month in October, we wanted to highlight new and existing accessibility features that you or your teams can use while working on the web and on Chromebooks.
Before we get started, don’t forget to explore and enable accessibility features in your settings first. At the bottom right of your Chromebook, select the time, or press Alt + Shift + s. Then select “Settings.” Scroll to the bottom of the screen and choose “Advanced.” In the “Accessibility” section, select “Manage accessibility features.” If you want to enable them even quicker, turn on “Always show accessibility features” in the system menu of your Chromebook to skip some of these steps in the future.
Explore new enterprise accessibility policies
In Chrome 84, fifteen new accessibility policies were added to the Google Admin console for Chrome OS devices. They allow IT to manage these settings centrally for their users. The new policies include: ChromeVox spoken feedback, Select-to-speak, High contrast, Screen magnifier, Sticky keys, Virtual keyboard, Dictation, Keyboard focus highlight, Caret highlight, Auto-click enabled, Large cursor, Cursor highlight, Primary mouse button, Mono audio and Accessibility shortcuts. Head to the Admin console to enable them in your organization.
Change your cursor color
Another new feature allows your workforce to update the color of their cursor to improve its visibility on Chrome OS. There are seven new colors available: red, yellow, green, cyan, blue, magenta and pink, in addition to default black. To change the cursor, go to the “Mouse and touchpad” section of Settings.
Shade background text in Select-to-speak
Select-to-speak lets users select text on a specific part of their screen and have it spoken aloud—extremely helpful for folks with low vision or learning disabilities. To make it easier to focus on the spoken text, you can now shade the background text that is not being highlighted. To enable this select-to-speak feature, search for “Select-to-speak settings” within Settings.
Use Voice Switching and other ChromeVox enhancements
Recent enhancements to the ChromeVox screen reader help users with visual impairments use Chrome OS. Employees can now utilize Voice Switching which automatically changes the screen reader’s voice based on the language of the text being read. Some additional updates include: new speech customization options, Smart Sticky Mode, and improved navigation on ChromeVox menus. To start using the screen reader, check out this help center article for more info.
Export accessible PDFs through Chrome
Chrome now generates more accessible PDFs. Users can save web pages as PDFs that include metadata such as the page’s headings, lists, tables, paragraphs, and image descriptions. This makes the web more accessible for people who are blind or have low vision and use a screen reader to access PDF files.
Customize web page content and font sizes in Chrome
Chrome allows your workforce to change the size of everything on the website they visit, including text, images, and videos. It’s also possible to change only the size of the font on a web page. Read this help center article to learn how.
Zoom and magnify your screen
Employees with limited vision can use higher levels of zoom to make the screen easier to see. They can also magnify their entire screen or just specific parts of it. For more information on both options, read this help center article.
Leverage extensions to improve accessibility
Many extensions can also help users with disabilities enjoy the web. Check out this list of extensions for other tools that tailor Chrome to your users’ unique needs. As an admin, remember that you can manage and pre-install Chrome extensions for your workforce.
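For admins who manage Chrome through policy files rather than the Admin console, force-installed extensions are typically configured with the ExtensionInstallForcelist policy. A minimal sketch follows; the 32-character extension ID below is a placeholder, not a real extension:

```json
{
  "ExtensionInstallForcelist": [
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa;https://clients2.google.com/service/update2/crx"
  ]
}
```

Each entry pairs an extension ID with its update URL, separated by a semicolon; extensions installed this way cannot be removed by the user.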
We’re continuously making improvements to Chrome and Chrome OS to ensure our products are as accessible as possible for everyone. We encourage you to share these features with your workforce to improve accessibility and enable them to be more productive. To stay updated on accessibility news from our team, check out the Accessibility Help Center. We also offer a Disability Support team who can help answer questions about using assistive technology and accessibility features with Google products. To chat with support agents, visit g.co/disabilitysupport.
To discover other benefits that Chrome and Chrome OS can bring to your organization, visit our website.
Recently, I was awarded a Google Open Source Peer Bonus, which I’m grateful for, as it proved to me that one can contribute value to open source projects, and build a career in it, without extensive experience coding. So how can someone with limited coding skills like me contribute to open source in a meaningful way?
Documentation is important across open source and especially helpful to those who are new to a project! Developers and maintainers are often focused on fixing bugs and improving the software, so documentation is harder to prioritize and contributions to it are highly appreciated. Being experienced with an application won’t always help you write its documentation, since familiarity can cause you to miss a step. This is why, as a beginner, you are in an excellent position to ensure that instructions and step-by-step guides are easy to follow, don’t skip vital steps, and don’t use off-putting language.
If you have the opportunity to get involved in programs like Season of Docs as a mentor or a participant, as I did in 2019, the experience is hugely rewarding!
Events and Conferences
If you can help with mailing lists or organizing events, you can get involved in the community! In 2006, I became involved with the nascent Open Source Geospatial Foundation (OSGeo), where I was persuaded to set up a local chapter in the United Kingdom (going strong 14 years later!). It was one of the best things I could’ve done. This year we hosted a global conference (FOSS4G) and several UK events, including an online-only event. We’ve also managed to financially support a number of open source projects by providing an annual sponsorship, or by contributing to the funding of a specific improvement. I’ve met so many great people through my involvement in OSGeo, some of whom have become colleagues and good friends.
The group meeting at FOSS4G 2013 in Nottingham
If you’re interested in writing case studies, you can always speak about your experiences at conferences. Evidence that particular packages can be used successfully in real-world situations is incredibly valuable, and can help others put together business cases for considering an open source solution.
Sometimes the problems you face with technology are shared by many others, and by open-sourcing your solution you could help a lot of people. When I first started using open source software, the packages I needed were often hard to install and configure on Windows, and had to be started from the command prompt, which can be intimidating for beginners. To scratch a problem-solving itch, I packaged them up onto a USB stick, added some batch files to make them load properly from an external drive, added a little menu for starting them, and Portable GIS was born. Twelve years, a few iterations, a website and a GitLab repository later, it has been downloaded thousands of times, and is used in situations such as disaster relief, where rapidly installing lots of software on often old PCs is not really an option.
Once you are proficient in something, use your knowledge to help others. Some existing platforms for software use and development (online repositories like GitHub or GitLab) are extremely intimidating to new users, and create barriers to participation. If you can help people get over the fear-inducing first pull request, you will empower them to keep on contributing. My first pull request was a contribution to the Vaguely Rude Place-names map back in 2013, and since then I’ve run a few training events along similar lines at conferences.
Open source is now fundamental to my career—16 years after learning about it—and something I am truly passionate about. It has shaped my life in many ways. I hope that my experiences might help someone who isn’t versed in code to get involved, realizing that their contributions are just as valuable as bug fixes and patches.
Editor’s note: Jake Wood is the CEO of Team Rubicon, a Google.org grantee. Today, he talks about how their preparedness efforts help communities across the U.S. respond to natural disasters.
The idea for Team Rubicon came after I finished my two tours with the Marine Corps in 2010. The devastation from the Haiti earthquake was unfolding, and I couldn’t just stand idly by and watch. I realized there was an untapped resource in veterans like myself. Our collective knowledge could help communities recover from tornadoes, fires, floods and hurricanes like the one Haiti was reeling from.
I co-founded Team Rubicon with a vision to create a team of volunteer military veterans and first responders that could bring immediate relief to marginalized communities recovering from disasters. Lately, we’ve been building on that vision and thinking about how we can better help communities prepare before a crisis happens. To that end, we started the Resilient Cities Initiative, which focuses on recruiting, organizing, and training thousands of veterans and volunteers across 300 metropolitan areas to respond to disasters at a local level. And thanks to Google.org’s $1 million grant last year, we were able to start expanding our Resilient Cities Initiative and scale the structures needed to train a localized and skilled volunteer base.
We expected this project to increase the resilience of cities. But 2020 gave us the opportunity to prove our hypothesis in ways we never imagined. While some had estimated this would be a record-breaking year for natural disasters, no one predicted that a pandemic would compound these crises.
This spring, Team Rubicon volunteers saw firsthand how the spread of COVID-19 destabilized communities. With restrictions on long-distance travel, local volunteers became the only solution for direct service organizations. Simultaneously, the disaster season raged on. There were tornadoes and derechos in the Midwest, Hurricanes Laura, Sally, and Delta in the Southeast, and fires in the West—leaving communities across the country struggling with where to start the recovery process. Thanks to support from Google, when these disasters hit during the pandemic, we already had volunteers who lived in those communities and were able to quickly and safely go out and help.
To date, we’ve managed hundreds of requests for assistance with food, personal protective equipment (PPE) distribution, COVID-19 testing, storm response, and other efforts critical to alleviating the strain on local resources. Thousands of our volunteers have deployed to missions in the communities where they live and have performed over 9,000 acts of service. Additionally, our food support operations have served more than 2.7 million meals, and our volunteers have driven 122,000 miles (nearly five times around the earth) to deliver 48 million pounds of food to the doorsteps of vulnerable residents across hundreds of cities.
While Google.org’s support helped fuel the success of this program, to us it was more than just funding. Google.org pushed us to think bigger, be bolder and gain the needed lessons to confront what we can expect to face for the foreseeable future.
In 2010 we set out with a big dream: to transform disaster response. We threw out the playbook and recruited a generation of people who’d served in some of the world’s most complex environments. Today, during a year of compounding crises, communities are turning to veterans to lead them through. That’s something we can all be proud of.