Tag

Explore by tags

  • All
  • 30u30
  • ASR
  • ChatGPT
  • GNN
  • IDE
  • ai-agent
  • ai-coding
  • ai-image
  • ai-tools
  • ai-video
  • AIGC
  • alibaba
  • anthropic
  • audio
  • blog
  • book
  • chatbot
  • chemistry
  • course
  • deepmind
  • deepseek
  • engineering
  • foundation
  • foundation-model
  • google
  • LLM
  • math
  • NLP
  • openai
  • paper
  • physics
  • plugin
  • RL
  • science
  • translation
  • tutorial
  • vibe-coding
  • video
  • vision
  • xAI

Computing Machinery and Intelligence

1950
Alan Turing

This is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.

paper · foundation

The perceptron: a probabilistic model for information storage and organization in the brain

1958
Frank Rosenblatt

Frank Rosenblatt’s 1958 paper introduced the perceptron, a probabilistic model that mimics neural connections for learning and pattern recognition. Despite its early limitations and later critiques, it laid the mathematical and conceptual groundwork for modern neural networks and sparked decades of research in artificial intelligence.

paper · foundation
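
A rough sketch of the perceptron learning rule described in the paper, in plain NumPy. The AND-gate data, learning rate of 1, and epoch count below are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    # Toy, linearly separable data (illustrative): the AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)  # weights
    b = 0.0          # bias

    # Classic perceptron rule: update weights only on misclassified examples.
    for epoch in range(10):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred   # +1, 0, or -1
            w += error * xi         # nudge the decision boundary toward xi
            b += error

    print(w, b)  # a separating hyperplane for AND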

Learning Internal Representations by Error Propagation

1985
David E. Rumelhart, Geoffrey E. Hinton +1

This paper introduces the generalized delta rule, a learning procedure for multi-layer networks with hidden units, enabling them to learn internal representations. This rule implements a gradient descent method to minimize the error between the network's output and a target output by propagating error signals backward through the network. The authors demonstrate through simulations on various problems, such as XOR and parity, that this method, often called backpropagation, can discover complex internal representations and solutions. They show it overcomes previous limitations in training such networks and rarely encounters debilitating local minima.

paper · foundation
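
A compact sketch of the generalized delta rule on the XOR problem discussed in the paper, using NumPy and sigmoid units. The two-unit hidden layer, learning rate, and iteration count are assumptions for illustration, not the paper's exact setup.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1, b1 = rng.normal(0, 1, (2, 2)), np.zeros(2)  # input -> hidden
    W2, b2 = rng.normal(0, 1, (2, 1)), np.zeros(1)  # hidden -> output
    lr = 0.5

    for step in range(20000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate error signals through the sigmoid derivatives.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent updates (the generalized delta rule).
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))  # usually close to [0, 1, 1, 0]; a poor local minimum is possible but rare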

Keeping NN Simple by Minimizing the Description Length of the Weights

1993
Geoffrey E. Hinton, Drew van Camp

This paper proposes minimizing the information content in neural network weights to enhance generalization, particularly when training data is scarce. It introduces a method where adaptable Gaussian noise is added to the weights, balancing the expected squared error against the amount of information the weights contain. Leveraging the Minimum Description Length (MDL) principle and a "bits back" argument for communicating these noisy weights, the approach enables efficient derivative computations, especially if output units are linear. The paper also explores using adaptive mixtures of Gaussians for more flexible prior distributions for weight coding. Preliminary results indicated a slight improvement over simple weight-decay on a high-dimensional task.

foundation · 30u30 · paper
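
A loose sketch of the kind of objective the paper describes: an expected data-misfit term for noisy Gaussian weights plus the cost of communicating those weights, written here as the KL divergence between each weight's posterior N(mu, sigma^2) and a fixed Gaussian prior. The linear model, prior width, and Monte Carlo sampling below are assumptions for illustration, not the paper's exact procedure.

    import numpy as np

    def weight_description_length(mu, sigma, prior_sigma=1.0):
        # Bits-back style cost of the noisy weights, in nats:
        # sum of KL( N(mu_i, sigma_i^2) || N(0, prior_sigma^2) ) over weights.
        return np.sum(
            np.log(prior_sigma / sigma)
            + (sigma ** 2 + mu ** 2) / (2 * prior_sigma ** 2)
            - 0.5
        )

    def mdl_objective(mu, sigma, X, y, n_samples=10, rng=None):
        # Expected squared error under Gaussian weight noise, plus the weight cost.
        rng = rng if rng is not None else np.random.default_rng(0)
        err = 0.0
        for _ in range(n_samples):
            w = mu + sigma * rng.standard_normal(mu.shape)  # sample noisy weights
            err += np.mean((X @ w - y) ** 2)
        return err / n_samples + weight_description_length(mu, sigma)

    # Tiny linear example with made-up data.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 3))
    y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.standard_normal(50)
    mu, sigma = np.zeros(3), np.full(3, 0.5)
    print(mdl_objective(mu, sigma, X, y, rng=rng))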

A Tutorial Introduction to the Minimum Description Length Principle

2004
Peter Grünwald

This paper gives a concise tutorial on the Minimum Description Length (MDL) principle, unifying its intuitive and formal foundations, and has helped inspire the widespread use of MDL in statistics and machine learning.

foundation · 30u30 · paper · math
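
For orientation, the crude two-part form of the MDL principle covered in the tutorial can be written (in standard notation, not quoted from the paper) as choosing the hypothesis that minimizes

    \min_{H \in \mathcal{H}} \; \bigl[ L(H) + L(D \mid H) \bigr]

where L(H) is the number of bits needed to describe the hypothesis H and L(D \mid H) is the number of bits needed to describe the data D with the help of H.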

Pattern Recognition and Machine Learning

2006
Christopher M. Bishop

The book covers probabilistic approaches to machine learning, including Bayesian networks, graphical models, kernel methods, and EM algorithms. It emphasizes a statistical perspective over purely algorithmic approaches, helping formalize machine learning as a probabilistic inference problem. Its clear mathematical treatment and broad coverage have made it a standard reference for researchers and graduate students. The book’s impact lies in shaping the modern probabilistic framework widely used in fields like computer vision, speech recognition, and bioinformatics, deeply influencing the development of Bayesian machine learning methods.

foundation · book

The Elements of Statistical Learning

2009
Trevor Hastie, Robert Tibshirani +1

The book unifies key machine learning and statistical methods — from linear models and decision trees to boosting, support vector machines, and unsupervised learning. Its clear explanations, mathematical rigor, and practical examples have made it a cornerstone for researchers and practitioners alike. The book has deeply influenced both statistics and computer science, shaping how modern data science integrates theory with application, and remains a must-read reference for anyone serious about statistical learning and machine learning.

foundation · book

Machine Super Intelligence

2011
Shane Legg

This book develops a formal theory of intelligence, defining it as an agent’s capacity to achieve goals across computable environments and grounding the concept in Kolmogorov complexity, Solomonoff induction, and Hutter’s AIXI framework. It shows how these idealised constructs unify prediction, compression, and reinforcement learning, yielding a universal intelligence measure while exposing the impracticality of truly optimal agents due to incomputable demands. Finally, it explores how approximate implementations could trigger an intelligence explosion and stresses the profound ethical and existential stakes posed by machines that surpass human capability.

foundation · 30u30 · book
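
The universal intelligence measure at the heart of the book can be stated compactly in the standard Legg-Hutter notation:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

where E is the set of computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} is the expected total reward agent \pi achieves in \mu, so simpler environments carry more weight.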

The First Law of Complexodynamics

2011
Scott Aaronson

This post explores why physical systems’ “complexity” rises, peaks, then falls over time, unlike entropy, which always increases. Using Kolmogorov complexity and the notion of “sophistication,” the author proposes a formal way to capture this pattern, introducing the idea of “complextropy” — a complexity measure that’s low in both highly ordered and fully random states but peaks during intermediate, evolving phases. He suggests using computational resource bounds to make the measure meaningful and proposes both theoretical and empirical (e.g., using file compression) approaches to test this idea, acknowledging it as an open problem.

foundation · blog · 30u30 · tutorial
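
The post's suggested empirical test, using compressed file size as a resource-bounded stand-in for complexity, can be sketched as below. The coarse-graining scheme, the toy one-dimensional "coffee and cream" states, and the use of zlib are all illustrative assumptions rather than the post's exact proposal.

    import zlib
    import numpy as np

    def complextropy_proxy(state, block=1000, bins=8):
        # Coarse-grain the state into block averages, quantize to a few levels,
        # then use the compressed size of that description as the measure.
        coarse = state.reshape(-1, block).mean(axis=1)
        levels = np.digitize(coarse, np.linspace(0, 1, bins)).astype(np.uint8)
        return len(zlib.compress(levels.tobytes(), level=9))

    rng = np.random.default_rng(0)
    n = 100_000

    ordered = np.concatenate([np.zeros(n // 2, np.uint8), np.ones(n // 2, np.uint8)])
    mixed = rng.integers(0, 2, n).astype(np.uint8)
    # A hand-made "intermediate" state: the mixing fraction varies slowly across space.
    p = (np.sin(np.linspace(0, 40, n)) + 1) / 2
    intermediate = (rng.random(n) < p).astype(np.uint8)

    for name, s in [("ordered", ordered), ("intermediate", intermediate), ("mixed", mixed)]:
        print(name, complextropy_proxy(s))  # the intermediate state typically scores highest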

Machine Learning: A Probabilistic Perspective

2012
Kevin P. Murphy

The book offers a comprehensive, mathematically rigorous introduction to machine learning through the lens of probability and statistics. Covering topics from Bayesian networks to graphical models and deep learning, it emphasizes probabilistic reasoning and model uncertainty. The book has become a cornerstone text in academia and industry, influencing how researchers and practitioners think about probabilistic modeling. It’s widely used in graduate courses and cited in numerous research papers, shaping a generation of machine learning experts with a solid foundation in probabilistic approaches.

foundation · book

ImageNet Classification with Deep Convolutional Neural Networks

2012
Alex Krizhevsky, Ilya Sutskever +1

This 2012 paper introduced AlexNet, a deep CNN that dramatically improved image classification accuracy on ImageNet, halving the top-5 error rate from ~26% to ~15%. Its innovations, such as ReLU activations, dropout, GPU training, and data augmentation, sparked the deep learning revolution, laying the foundation for modern computer vision and advancing AI across industries.

vision · 30u30 · paper · foundation
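
A scaled-down PyTorch sketch of the AlexNet recipe (stacked convolutions with ReLU nonlinearities, max pooling, and dropout in the classifier). The channel counts and layer sizes are reduced for illustration and are not the paper's exact architecture.

    import torch
    import torch.nn as nn

    tiny_alexnet = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3),
        nn.ReLU(inplace=True),                 # ReLU instead of saturating tanh/sigmoid units
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.AdaptiveAvgPool2d((6, 6)),
        nn.Flatten(),
        nn.Dropout(p=0.5),                     # dropout regularizes the classifier
        nn.Linear(64 * 6 * 6, 256),
        nn.ReLU(inplace=True),
        nn.Linear(256, 1000),                  # 1000 ImageNet classes
    )

    x = torch.randn(1, 3, 224, 224)            # a dummy ImageNet-sized image
    print(tiny_alexnet(x).shape)               # torch.Size([1, 1000])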

Playing Atari with Deep Reinforcement Learning

2013
Volodymyr Mnih, Koray Kavukcuoglu +5

The paper by DeepMind introduced Deep Q-Networks (DQN), the first deep learning model to learn control policies directly from raw pixel input using reinforcement learning. By combining Q-learning with convolutional neural networks and experience replay, DQN achieved superhuman performance on several Atari 2600 games without handcrafted features or game-specific tweaks. Its impact was profound: it proved deep learning could master complex tasks with sparse, delayed rewards, catalyzing the modern wave of deep reinforcement learning research and paving the way for later breakthroughs like AlphaGo.

RL · deepmind · paper
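
A bare-bones sketch of the DQN ingredients the paper combines: a replay buffer of transitions and a one-step Q-learning target regressed with gradient descent. The tiny fully connected network, 4-dimensional state, and hyperparameters are placeholders standing in for the paper's convolutional Atari setup.

    import random
    from collections import deque

    import torch
    import torch.nn as nn

    q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # 4-dim state, 2 actions
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)  # experience replay buffer
    gamma = 0.99

    def store(s, a, r, s_next, done):
        replay.append((s, a, r, s_next, done))

    def train_step(batch_size=32):
        if len(replay) < batch_size:
            return
        batch = random.sample(replay, batch_size)   # random sampling breaks temporal correlations
        s, a, r, s_next, done = map(torch.tensor, zip(*batch))
        s, s_next = s.float(), s_next.float()
        q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)   # Q(s, a)
        with torch.no_grad():                                      # one-step Q-learning target
            target = r.float() + gamma * q_net(s_next).max(dim=1).values * (1 - done.float())
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Fill the buffer with random placeholder transitions (no real environment here).
    for _ in range(100):
        store([random.random() for _ in range(4)], random.randrange(2),
              random.random(), [random.random() for _ in range(4)], random.random() < 0.1)
    train_step()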