Measuring abstract reasoning in neural networks
Neural network-based models continue to achieve impressive results on longstanding machine learning problems, but establishing their capacity to reason about abstract concepts has proven difficult. Building on previous efforts to address this important capability of general-purpose learning systems, our latest paper sets out an approach for measuring abstract reasoning in learning machines, and reveals some important insights about the nature of generalisation itself.
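One common way to probe generalisation of this kind is to hold out certain combinations of concepts at training time and test on them later. The sketch below illustrates that idea in miniature; the relation and attribute names, and the `held_out_split` helper, are illustrative assumptions, not the paper's actual task taxonomy.

```python
from itertools import product

# Hypothetical puzzle specification: each puzzle type applies one relation
# (e.g. "progression") to one attribute (e.g. "colour"). These names are
# illustrative only.
RELATIONS = ["progression", "XOR", "union"]
ATTRIBUTES = ["colour", "size", "number"]

def held_out_split(held_out_pairs):
    """Split all (relation, attribute) puzzle types so that the held-out
    combinations appear only at test time, forcing a model to recombine
    concepts it has never seen together during training."""
    all_pairs = list(product(RELATIONS, ATTRIBUTES))
    train = [p for p in all_pairs if p not in held_out_pairs]
    test = [p for p in all_pairs if p in held_out_pairs]
    return train, test

train, test = held_out_split({("progression", "colour")})
```

A model that scores well on `test` puzzles despite never training on that combination is showing some ability to recombine familiar concepts in novel ways, which is one operational reading of "abstract reasoning".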
July 11, 2018
AI / Google