...
Karl Friston may be providing conceptual tools, but each of us (including AI programmers) must do the hard work of applying those concepts to the reality of our lives. That includes applying the concept of 'Embodied Cognition', which is closely associated with 'mindfulness' and is grounded in actual stimuli/input from outside our Markov Blankets (whether the Markov Blanket is the one represented by our bodies, or otherwise).
...
The reason that I mentioned AI development incorporating 'free energy' and 'active inference' concepts is that I suspect most people (including those reading this thread) will not know how to apply such concepts to their daily activities without significant input from an AI agent. Furthermore, I note that 'free energy' is (up to sign) the 'evidence lower bound' (ELBO) that is optimized by a large portion of today's machine learning algorithms (e.g., see the first linked reference on 'deep active inference'), and thus for 'wicked problems' like climate change, quantum computing would likely need to be combined with deep learning (see the second linked reference, 'Bayesian Deep Learning on a Quantum Computer').
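As a minimal, self-contained sketch of the ELBO point (the numbers below are made up for illustration and are not from any of the linked papers): for any variational posterior q over a latent z, the ELBO is a lower bound on the log-evidence, and it is tight exactly when q is the true posterior - which is why minimizing variational free energy (the negative ELBO) amounts to approximate Bayesian inference.

```python
import math

# Toy model: a binary latent z and one observed x.
# p_joint[z] = p(x, z) for the observed x, with hand-picked (hypothetical) values.
p_joint = {0: 0.3, 1: 0.1}
log_evidence = math.log(sum(p_joint.values()))  # log p(x)

def elbo(q):
    """ELBO = E_q[log p(x, z) - log q(z)]; its negative is the variational free energy."""
    return sum(q[z] * (math.log(p_joint[z]) - math.log(q[z]))
               for z in q if q[z] > 0)

q_guess = {0: 0.5, 1: 0.5}                                   # an arbitrary guess
q_true = {z: p / sum(p_joint.values()) for z, p in p_joint.items()}  # true posterior

# Any q gives ELBO <= log p(x); the true posterior makes the bound tight.
print(elbo(q_guess), elbo(q_true), log_evidence)
```

Maximizing the ELBO over q (i.e., minimizing free energy) therefore drives q toward the true posterior, which is the core move shared by variational autoencoders and active inference agents.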
It's also worth noting that 'predictive coding' - a dominant paradigm in neuroscience - is a form of free energy minimization. Moreover, free energy minimization (as predictive coding) approximates the backpropagation algorithm, but in a biologically plausible fashion. In fact, most biologically plausible deep learning approaches use some form of prediction-error signal, and are therefore functionally akin to predictive coding. Which is to say that the notion of free energy minimization is somewhat commonplace in both neuroscience and machine learning, but that both quantum deep learning (QDL) and quantum reinforcement learning (QRL) will need to be applied to solve the most challenging problems (see the third and fourth linked references).
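To illustrate the 'prediction error drives learning' idea in the simplest possible setting - a single linear unit with hypothetical data, not any of the referenced models - note that the error-driven update below is exactly the gradient step backpropagation would take for one layer, which is the sense in which predictive-coding-style learning can approximate backprop:

```python
import random

random.seed(0)

# One linear unit learning y ≈ w * x by reducing squared prediction error.
w = 0.0        # the unit's weight (its "model" of the world)
lr = 0.1       # learning rate
true_w = 2.0   # hidden cause generating the "sensory" signal

for _ in range(200):
    x = random.uniform(-1, 1)
    y = true_w * x          # incoming sensory signal
    err = y - w * x         # prediction error (signal minus prediction)
    w += lr * err * x       # error-driven update == single-layer backprop step

print(round(w, 3))          # w has converged close to true_w
```

The learned weight ends up near 2.0: the unit has minimized its average prediction error, i.e., (a quadratic special case of) its free energy.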
Kai Ueltzhöffer (2018), "Deep Active Inference", arXiv:1709.02341v5
https://arxiv.org/pdf/1709.02341.pdf
Abstract: "This work combines the free energy principle from cognitive neuroscience and the ensuing active inference dynamics with recent advances in variational inference on deep generative models and evolution strategies as efficient large-scale black-box optimization technique, to introduce the "deep active inference" agent. This agent tries to minimize a variational free energy bound on the average surprise of its sensations, which is motivated by a homeostatic argument. It does so by changing the parameters of its generative model, together with a variational density approximating the posterior distribution over latent variables, given its observations, and by acting on its environment to actively sample input that is likely under its generative model. The internal dynamics of the agent are implemented using deep neural networks, as used in machine learning, and recurrent dynamics, making the deep active inference agent a scalable and very flexible class of active inference agents. Using the mountaincar problem, we show how goal-directed behaviour can be implemented by defining sensible prior expectations on the latent states in the agent's model, that it will try to fulfil. Furthermore, we show that the deep active inference agent can learn a generative model of the environment, which can be sampled from to understand the agent's beliefs about the environment and its interaction with it."
&
Zhikuan Zhao, Alejandro Pozas-Kerstjens, Patrick Rebentrost, Peter Wittek (2018), "Bayesian Deep Learning on a Quantum Computer", arXiv:1806.11463
https://arxiv.org/abs/1806.11463
Abstract: "Bayesian methods in machine learning, such as Gaussian processes, have great advantages compared to other techniques. In particular, they provide estimates of the uncertainty associated with a prediction. Extending the Bayesian approach to deep architectures has remained a major challenge. Recent results connected deep feedforward neural networks with Gaussian processes, allowing training without backpropagation. This connection enables us to leverage a quantum algorithm designed for Gaussian processes and develop a new algorithm for Bayesian deep learning on quantum computers. The properties of the kernel matrix in the Gaussian process ensure the efficient execution of the core component of the protocol, quantum matrix inversion, providing an at least polynomial speedup over the classical algorithm. Furthermore, we demonstrate the execution of the algorithm on contemporary quantum computers and analyze its robustness with respect to realistic noise models."
&
Thomas Fösel, Petru Tighineanu, Talitha Weiss, Florian Marquardt (2018), "Reinforcement Learning with Neural Networks for Quantum Feedback", Physical Review X, 8(3), DOI: 10.1103/PhysRevX.8.031084
https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.031084
Abstract: "Machine learning with artificial neural networks is revolutionizing science. The most advanced challenges require discovering answers autonomously. In the domain of reinforcement learning, control strategies are improved according to a reward function. The power of neural-network-based reinforcement learning has been highlighted by spectacular recent successes such as playing Go, but its benefits for physics are yet to be demonstrated. Here, we show how a network-based “agent” can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise. These strategies require feedback adapted to measurement outcomes. Finding them from scratch without human guidance and tailored to different hardware resources is a formidable challenge due to the combinatorially large search space. To solve this challenge, we develop two ideas: two-stage learning with teacher and student networks and a reward quantifying the capability to recover the quantum information stored in a multiqubit system. Beyond its immediate impact on quantum computation, our work more generally demonstrates the promise of neural-network-based reinforcement learning in physics."
V. Dunjko, J. M. Taylor, H. J. Briegel (2018), "Advances in Quantum Reinforcement Learning", IEEE SMC, Banff, AB, pp. 282-287, DOI: 10.1109/SMC.2017.8122616
https://arxiv.org/abs/1811.08676
Abstract: "In recent times, there has been much interest in quantum enhancements of machine learning, specifically in the context of data mining and analysis. Reinforcement learning, an interactive form of learning, is, in turn, vital in artificial intelligence-type applications. Also in this case, quantum mechanics was shown to be useful, in certain instances. Here, we elucidate these results, and show that quantum enhancements can be achieved in a new setting: the setting of learning models which learn how to improve themselves -- that is, those that meta-learn. While not all learning models meta-learn, all non-trivial models have the potential of being "lifted", enhanced, to meta-learning models. Our results show that also such models can be quantum-enhanced to make even better learners. In parallel, we address one of the bottlenecks of current quantum reinforcement learning approaches: the need for so-called oracularized variants of task environments. Here we elaborate on a method which realizes these variants, with minimal changes in the setting, and with no corruption of the operative specification of the environments. This result may be important in near-term experimental demonstrations of quantum reinforcement learning."
&
Next, I note that while most forecasts indicate that general-purpose commercial quantum computers are about 10 years (+/- 5 years) away, the next linked article indicates that D-Wave already offers a development platform that allows programmers to use classical algorithms together with its commercial quantum-annealing computers. Also, I note that while D-Wave's commercial quantum computers are currently not general purpose, the company is working on another development platform that will allow programmers to address general-purpose problems.
Title: "TechRepublic: D-Wave releases development kit for hybrid quantum-classical applications"
https://www.dwavesys.com/media-coverage/techrepublic-d-wave-releases-development-kit-hybrid-quantum-classical-applications
https://www.techrepublic.com/article/d-wave-releases-development-kit-for-hybrid-quantum-classical-applications/
Extract: "D-Wave Systems announced D-Wave Hybrid—an open-source platform for developing hybrid quantum-classical applications—on Monday, at the Quantum for Business conference in Mountain View, CA. The new development platform gives programmers the ability to more easily use classical and quantum computers in parallel, without requiring knowledge of quantum mechanics to get started."
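For intuition about what the quantum side of such a hybrid workflow is asked to do: a quantum annealer like D-Wave's minimizes a QUBO (quadratic unconstrained binary optimization) objective. The sketch below is a purely classical brute-force stand-in for the annealing step on a toy 3-variable problem - the Q matrix is made up for illustration, and this does not use D-Wave's actual API:

```python
from itertools import product

# QUBO: minimize E(x) = sum over (i, j) of Q[i, j] * x_i * x_j, x_i in {0, 1}.
# Hypothetical coefficients: each variable "wants" to be 1 (diagonal -1),
# but adjacent variables penalize being 1 together (off-diagonal +2).
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1, (0, 1): 2, (1, 2): 2}

def energy(x):
    """QUBO energy of a binary assignment x (tuple of 0s and 1s)."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustive search over all 2^3 assignments stands in for the annealer,
# which samples low-energy states of much larger problems.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # the conflict-free assignment (1, 0, 1) wins
```

In a real hybrid application, classical code formulates Q (and post-processes the results) while the annealer handles the sampling; the brute-force loop is exactly the part that stops scaling classically.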
&
Lastly, I provide the following link to a Wikipedia article on quantum machine learning, which contains relevant, if somewhat dated, information.
Title: "Quantum machine learning"
https://en.wikipedia.org/wiki/Quantum_machine_learning
Extract: "Quantum machine learning is an emerging interdisciplinary research area at the intersection of quantum physics and machine learning. The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer."
Edit: For what it is worth, I provide a first image that illustrates the policy network and value network (used in machine learning for AlphaGo), and a second image that uses the Prisoner's Dilemma as a very simple example of the strategies used to address 'wicked problems' like trying to limit anthropogenic GHG emissions.
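Since the second image is not reproduced here, a small sketch of the Prisoner's Dilemma logic it depicts may help (standard textbook payoffs, not taken from the image): defection dominates for each player individually, yet mutual cooperation beats mutual defection - the same incentive structure that makes limiting GHG emissions a collective-action problem.

```python
# Prisoner's Dilemma payoffs for the row player, in years of prison
# (lower is better): C = cooperate (stay silent), D = defect (betray).
payoff = {('C', 'C'): 1, ('C', 'D'): 3, ('D', 'C'): 0, ('D', 'D'): 2}

def best_response(opponent):
    """The move minimizing my sentence, given the opponent's move."""
    return min(('C', 'D'), key=lambda me: payoff[(me, opponent)])

# Defection is a dominant strategy: it is the best response either way...
assert best_response('C') == 'D' and best_response('D') == 'D'

# ...yet mutual defection (2, 2) leaves both players worse off than
# mutual cooperation (1, 1) - the dilemma, and the emissions analogy.
print(payoff[('D', 'D')], payoff[('C', 'C')])
```

Reframing a wicked problem so that cooperation becomes individually rational (via repeated interaction, enforcement, or side payments) is precisely the kind of strategy design the image alludes to.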