
Author Topic: Relevancy of Machine Learning to climatic models  (Read 9843 times)

Bernard

Relevancy of Machine Learning to climatic models
« on: December 12, 2016, 04:48:04 PM »
Re-starting a conversation which forked off-topic on the 2016/2017 freezing season thread
https://forum.arctic-sea-ice.net/index.php/topic,1611.msg96388/topicseen.html#msg96388

Let me sum up the question and the elements of answers so far.

Machine Learning has provided spectacular advances, which some call a paradigm shift, in domains as varied as image and speech recognition, machine translation and decision support, to name a few. Ongoing research is trying to apply Machine Learning approaches to weather forecasting and climate models, see e.g., http://www.climateinformatics.org/?q=node/84

Arguments so far, mostly against this approach (NeilT, Archimid, epiphyte):

- Not enough quality data to feed the learning, hence most likely Garbage In, Garbage Out.
- We are entering uncharted territory, so the data we have are irrelevant: the machine would learn only about the past.
- The inscrutability of Machine Learning algorithms, making them difficult to adopt as a scientific method.

The latter point is certainly the most fascinating one. Enthusiastic transhumanists are ready to skip over it and trust inscrutable algorithms provided they are effective. The majority of scientists are certainly less so, because science is about understanding how things work, not only about making "correct" decisions.

I wrote a small post a while ago which may be relevant, although it is not specific to climate science.

https://bvatant.blogspot.fr/2016/10/i-trust-you-because-i-dont-know-why.html


DrTskoul

Re: Relevancy of Machine Learning to climatic models
« Reply #1 on: December 13, 2016, 03:29:49 AM »
Machine learning as it is practiced so far is used to extract patterns and trends from past data. For example, given celestial mass and trajectory data, it can "discover" the laws of gravity. However, it is of little use when multiple physical phenomena are involved and measurements are noisy. It can discover physical correlations but cannot predict the existence of cliffs. As such, machine learning cannot see the future if the data used for training do not capture those patterns or behaviors. In a similar situation, a genetic algorithm can find the global optimum of a non-convex multidimensional function, but given a small set of data it cannot see through the cliffs and predict the existence of valleys in the function's value.
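A toy sketch of that blindness (hypothetical data, plain Python): a model fitted only to the smooth pre-cliff history extrapolates the trend it learned and misses the cliff entirely.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def true_system(x):
    """Toy 'physical' system: gentle decline, then a cliff at x = 10."""
    return 100.0 - x if x < 10 else 20.0

# Train only on the smooth pre-cliff history (x = 0..9).
train_x = list(range(10))
a, b = fit_line(train_x, [true_system(x) for x in train_x])

prediction_at_12 = a * 12 + b     # model says 88.0: the learned trend continues
reality_at_12 = true_system(12)   # reality is 20.0: the cliff already happened
```

No amount of fitting skill fixes this: the cliff is simply not in the training data.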

6roucho

Re: Relevancy of Machine Learning to climatic models
« Reply #2 on: December 13, 2016, 03:57:58 AM »
Isn't the problem here the limits of models, rather than the deficiencies of algorithms?

Machine learning could produce better algorithms, and thus provide better results, but state change behaviours in complex systems, although deterministic, can be difficult to predict with any accuracy, due to limitations in mathematics.

Without a state change in mathematics itself (and who could predict that?) I don't think we'll ever get to the point of being able to know in advance how a complex system will evolve far from equilibrium at the edge of chaos, or back again.

Murray Gell-Mann famously called such systems "an accumulation of frozen accidents."

AbruptSLR

Re: Relevancy of Machine Learning to climatic models
« Reply #3 on: December 13, 2016, 04:33:22 PM »
Successful climate modeling needs to include both a correct interpretation of likely anthropogenic input and a correct interpretation of output that considers the role of Lorenz attractors (chaos theory) w.r.t. climate sensitivity.

In this regard, machine learning and AI are just subsets of systems theory (see the discussion of complex adaptive systems (CAS) in the first linked Wikipedia article and the associated second attached image).  Furthermore, I note that "systems theory" is often treated as synonymous with "cybernetics", and there are many lessons from such research that can be applied towards the goal of achieving a sustainable global socio-economic system:

https://en.wikipedia.org/wiki/Systems_theory

Extract: "Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks.

The terms "systems theory" and "cybernetics" have been widely used as synonyms.

Complex adaptive systems (CAS) are special cases of complex systems. They are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience. In contrast to control systems in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features. Another mechanism, Dual-phase evolution arises when connections between elements repeatedly change, shifting the system between phases of variation and selection that reshape the system.

The term complex adaptive system was coined at the interdisciplinary Santa Fe Institute (SFI), by John H. Holland, Murray Gell-Mann and others. An alternative conception of complex adaptive (and learning) systems, methodologically at the interface between natural and social science, has been presented by Kristo Ivanov in terms of hypersystems. This concept intends to offer a theoretical basis for understanding and implementing participation of "users", decisions makers, designers and affected actors, in the development or maintenance of self-learning systems."

Also, I provide the following second link to a Wikipedia article on complex systems research, where models of such complex systems use formulae from chaos theory, statistical physics, information theory and non-linear dynamics.  Per the following extract: "Many real complex systems are, in practice and over long but finite time periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations."  Such insights are useful when considering the transformation of our current BAU-based global socio-economic system into what we will collectively become:

https://en.wikipedia.org/wiki/Complex_systems

Extract: "Complex systems present problems both in mathematical modelling and philosophical foundations. The study of complex systems represents a new approach to science that investigates how relationships between parts give rise to the collective behaviors of a system and how the system interacts and forms relationships with its environment.
The equations from which models of complex systems are developed generally derive from statistical physics, information theory and non-linear dynamics and represent organized but unpredictable behaviors of natural systems that are considered fundamentally complex. The physical manifestations of such systems are difficult to define, so a common choice is to identify "the system" with the mathematical information model rather than referring to the undefined physical subject the model represents.

Complexity and modeling

One of Hayek's main contributions to early complexity theory is his distinction between the human capacity to predict the behaviour of simple systems and its capacity to predict the behaviour of complex systems through modeling. He believed that economics and the sciences of complex phenomena in general, which in his view included biology, psychology, and so on, could not be modeled after the sciences that deal with essentially simple phenomena like physics. Hayek would notably explain that complex phenomena, through modeling, can only allow pattern predictions, compared with the precise predictions that can be made out of non-complex phenomena.

Complexity and chaos theory


Complexity theory is rooted in chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and of the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions about the future of the system, though in practice this is impossible to do with arbitrary accuracy. Ilya Prigogine argued that complexity is non-deterministic, and gives no way whatsoever to precisely predict the future.

The emergence of complexity theory shows a domain between deterministic order and randomness which is complex. This is referred as the "edge of chaos".

When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as within the chaos theory in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.

Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behaviour pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents." In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite time periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations."
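The deterministic-yet-unpredictable behaviour described in this extract can be sketched numerically. The logistic map is the textbook example (my illustrative parameters, not from the article): the rule is fully deterministic, yet a perturbation in the tenth decimal of the starting point eventually produces a completely different trajectory.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # same law, start perturbed by 1e-10

early_gap = abs(a[5] - b[5])  # after 5 steps: still indistinguishable
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # later: unrelated
```

Perfect knowledge of the equation does not help; any imprecision in the initial condition is amplified at every step until prediction fails.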
“It is not the strongest or the most intelligent who will survive but those who can best manage change.”
― Leon C. Megginson

AbruptSLR

Re: Relevancy of Machine Learning to climatic models
« Reply #4 on: December 13, 2016, 04:35:18 PM »
As a follow-on to my last post on "Systems Theory" and "Complex Systems" modeling, I provide the linked open-access reference, which uses technology substitution dynamic modeling within an Integrated Assessment Model (IAM) of the electricity sector.  This admittedly limited application of dynamic modeling indicates both the high likelihood that we will exceed the 2C limit this century, and that consideration of the regional impacts of climate change is critical when assessing likely damages:

A. M. Foley, P. B. Holden, N. R. Edwards, J.-F. Mercure, P. Salas, H. Pollitt, and U. Chewpreecha (2016), "Climate model emulation in an integrated assessment framework: a case study for mitigation policies in the electricity sector", Earth Syst. Dynam., 7, 119–132, doi:10.5194/esd-7-119-2016


http://www.earth-syst-dynam.net/7/119/2016/esd-7-119-2016.pdf

Abstract. We present a carbon-cycle–climate modelling framework using model emulation, designed for integrated assessment modelling, which introduces a new emulator of the carbon cycle (GENIEem). We demonstrate that GENIEem successfully reproduces the CO2 concentrations of the Representative Concentration Pathways when forced with the corresponding CO2 emissions and non-CO2 forcing. To demonstrate its application as part of the integrated assessment framework, we use GENIEem along with an emulator of the climate (PLASIM-ENTSem) to evaluate global CO2 concentration levels and spatial temperature and precipitation response patterns resulting from CO2 emission scenarios. These scenarios are modelled using a macroeconometric model (E3MG) coupled to a model of technology substitution dynamics (FTT), and represent different emissions reduction policies applied solely in the electricity sector, without mitigation in the rest of the economy. The effect of cascading uncertainty is apparent, but despite uncertainties, it is clear that in all scenarios, global mean temperatures in excess of 2 °C above pre-industrial levels are projected by the end of the century. Our approach also highlights the regional temperature and precipitation patterns associated with the global mean temperature change occurring in these scenarios, enabling more robust impacts modelling and emphasizing the necessity of focusing on spatial patterns in addition to global mean temperature change.

Finally, while "Systems Theory" and "Complex Systems" modeling is only beginning to be applied to climate change modeling, I note that even more dynamic progress is being made in the area of the regulation of DNA gene expression (a complex system critical to life):

https://en.wikipedia.org/wiki/Regulation_of_gene_expression


AbruptSLR

Re: Relevancy of Machine Learning to climatic models
« Reply #5 on: December 13, 2016, 04:40:35 PM »
Quote from: 6roucho
Murray Gell-Mann famously called such systems "an accumulation of frozen accidents."

I provide the following reference, co-authored by Murray Gell-Mann, that could be used to better address climate change issues including:  risk, insurance, revenue neutral carbon pricing, and other topics.  This reference makes it very clear that most humans (even most experts) have a very weak intuitive understanding of their own ignorance (which results in a poor understanding of gambles/risk that we are all exposed to w.r.t. climate consequences).

Ole Peters and Murray Gell-Mann (Feb. 2, 2016), "Evaluating gambles using dynamics," Chaos, DOI: 10.1063/1.4940236

http://scitation.aip.org/content/aip/journal/chaos/26/2/10.1063/1.4940236

Abstract: "Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic “utility functions” appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions."


Also see:
http://www.newswise.com/articles/exploring-gambles-reveals-foundational-difficulty-behind-economic-theory-and-a-solution

Extract: " In the wake of the financial crisis, many started questioning different aspects of the economic formalism.

This included Ole Peters, a Fellow at the London Mathematical Laboratory in the U.K., as well as an external professor at the Santa Fe Institute in New Mexico, and Murray Gell-Mann, a physicist who was awarded the 1969 Nobel Prize in physics for his contributions to the theory of elementary particles by introducing quarks, and is now a Distinguished Fellow at the Santa Fe Institute. They found it particularly curious that a field so central to how we live together as a society seems so unsure about so many of its key questions.

So they asked: Might there be a foundational difficulty underlying our current economic theory? Is there some hidden assumption, possibly hundreds of years old, behind not one but many of the current scientific problems in economic theory? Such a foundational problem could have far-reaching practical consequences because economic theory informs economic policy.

As they report in the journal Chaos, from AIP Publishing, the story that emerged is a fascinating example of scientific history, of how human understanding evolves, gets stuck, gets unstuck, branches, and so on.



The key concepts of time and randomness are at the heart of their work. "Questions of an economic nature stood at the beginning of formal thinking about randomness in the 17th century," he explained. "These are all relatively young concepts -- there's nothing in Euclid about probability theory." Think of it simply in terms of: Should I bet money in a game of dice? How much should I pay for an insurance contract? What would be a fair price for a life annuity?
"All of these questions have something to do with randomness, and the way to deal with them in the 17th century was to imagine parallel worlds representing everything that could happen," Gell-Mann said. "To assess the value of some uncertain venture, an average is taken across those parallel worlds."

This concept was only challenged in the mid-19th century when randomness was used formally in a different context -- physics. "Here, the following perspective arose: to assess some uncertain venture, ask yourself how it will affect you in one world only -- namely the one in which you live -- across time," Gell-Mann continued.

"The first perspective -- considering all parallel worlds -- is the one adopted by mainstream economics," explained Gell-Mann. "The second perspective -- what happens in our world across time -- is the one we explore and that hasn't been fully appreciated in economics so far."
The real impact of this second perspective comes from acknowledging the omission of the key concept of time from previous treatments. "We have some 350 years of economic theory involving randomness in one way only -- by considering parallel worlds," said Peters. "What happens when we switch perspectives is astonishing. Many of the open key problems in economic theory have an elegant solution within our framework."

In terms of applications for their work, its key concept can be used "to derive an entire economic formalism," said Peters. In their article, Peters and Gell-Mann explore the evaluation of a gamble. For example, is this gamble better than that gamble? This is the fundamental problem in economics. And from a conceptually different solution there follows a complete new formalism.
They put it to the test after their friend Ken Arrow -- an economist who was the joint winner of the Nobel Memorial Prize in Economic Sciences with John Hicks in 1972 -- suggested applying the technique to insurance contracts. "Does our perspective predict or explain the existence of a large insurance market? It does -- unlike general competitive equilibrium theory, which is the current dominant formalism," Peters said.

And so a different meaning of risk emerges -- taking too much risk is not only psychologically uncomfortable but also leads to real dollar losses. "Good risk management really drives performance over time," Peters added. "This is important in the current rethinking of risk controls and financial market infrastructure."

This concept reaches far beyond this realm and into all major branches of economics. "It turns out that the difference between how individual wealth behaves across parallel worlds and how it behaves over time quantifies how wealth inequality changes," explained Peters. "It also enables refining the notion of efficient markets and solving the equity premium puzzle."

One historically important application is the solution of the 303-year-old St. Petersburg paradox, which involves a gamble played by flipping a coin until it comes up tails and the total number of flips, n, determines the prize, which equals $2 to the nth power. "The expected prize diverges -- it doesn't exist," Peters elaborated. "This gamble, suggested by Nicholas Bernoulli, can be viewed as the first rebellion against the dominance of the expectation value -- that average across parallel worlds -- that was established in the second half of the 17th century."

What's the next step for their work? "We're very keen to develop fully the implications for welfare economics and questions of economic inequality. This is a sensitive subject that needs to be dealt with carefully, including empirical work," noted Peters. "Much is being done behind the scenes -- since this is a conceptually different way of doing things, communication is a challenge, and our work has been difficult to publish in mainstream economics journals."

Their results described in Chaos are easily generalized, which is necessary to reinterpret the full formalism. But it "may not add very much in practical terms, and it gets a little technical." So that's a future "to-do item" for Peters and Gell-Mann.

"Our Chaos paper is a recipe for approaching a wide range of problems," said Peters. "So we're now going through the entire formalism with our collaborators to see where else our perspective is useful.""
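The ensemble-versus-time distinction Peters and Gell-Mann describe can be made concrete with a toy multiplicative gamble (my illustrative numbers, not taken from their paper): a coin flip that multiplies wealth by 1.5 on heads and 0.6 on tails.

```python
# Ensemble ("parallel worlds") view: average the one-round growth factor
# over both outcomes. It exceeds 1, so the gamble looks favourable.
ensemble_growth = 0.5 * 1.5 + 0.5 * 0.6   # 1.05

# Time view: one player repeating the gamble sees, in the long run, half
# heads and half tails, so per-round wealth changes by the geometric mean.
time_growth = (1.5 * 0.6) ** 0.5          # ~0.949: wealth decays toward ruin
```

The expectation value says "take the gamble"; the time average says the same gamble ruins almost every individual who keeps playing it. That is exactly the difference between averaging across parallel worlds and averaging along one history.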

See also the following linked article entitled "Exploring gambles reveals foundational difficulty behind economic theory (and a solution)":


http://phys.org/news/2016-02-exploring-gambles-reveals-foundational-difficulty.html

Extract: ""Our Chaos paper is a recipe for approaching a wide range of problems," said Peters. "So we're now going through the entire formalism with our collaborators to see where else our perspective is useful.""
« Last Edit: December 13, 2016, 04:49:54 PM by AbruptSLR »

Bernard

Re: Relevancy of Machine Learning to climatic models
« Reply #6 on: December 13, 2016, 07:11:35 PM »
@AbruptSLR

Thanks for all the background reading material you bring in, but none of it, unless I am missing something, seems to directly address the question posed in this thread: is machine learning bringing something new to climate/weather prediction?

It seems to be applied successfully, for example, to stock market prediction; see
http://fr.slideshare.net/iknowfirst/machine-learning-stock-market-and-chaos-56626648

AbruptSLR

Re: Relevancy of Machine Learning to climatic models
« Reply #7 on: December 13, 2016, 07:15:12 PM »
Quote from: Bernard
Thanks for all the background reading material you bring in, but none of it, unless I am missing something, seems to directly address the question posed in this thread: is machine learning bringing something new to climate/weather prediction?

It seems to be applied successfully, for example, to stock market prediction; see
http://fr.slideshare.net/iknowfirst/machine-learning-stock-market-and-chaos-56626648

I don't think that machine learning will change the physics being modeled but it could help with selecting both the correct input and the correct interpretation of risk for the output.

AbruptSLR

Re: Relevancy of Machine Learning to climatic models
« Reply #8 on: December 13, 2016, 07:19:08 PM »
The linked article is entitled: "AI to Use Satellite Imaging Tech to Predict Food Crises Before They Happen", and it illustrates how machine learning can be used to better manage climate change risk:

http://www.natureworldnews.com/articles/33834/20161212/earth-food-food-shortages-satellite-geospatial-data-usda-descartes-labs.htm

Bernard

Re: Relevancy of Machine Learning to climatic models
« Reply #9 on: December 13, 2016, 09:11:18 PM »
AbruptSLR your latest post is spot on :)

The company behind the quoted article "AI to Use Satellite Imaging Tech to Predict Food Crises Before They Happen" is called Descartes Labs http://www.descarteslabs.com/

"The Descartes Labs platform combines massive data sources—whether public, private or proprietary—onto a single system. The data is then transformed into action by applying machine learning at scale to unlock the value in those datasets. We transform petabytes of data into action for your business.

Our platform enables a new way of doing science. We are asking new kinds of questions and solving the most challenging forecasting problems facing organizations today."

Of course this is a commercial pitch; there is huge competition in this market, where growth is impressive. "According to the new market research report on artificial intelligence, this market is expected to be worth USD 16.06 billion by 2022, growing at a CAGR of 62.9% from 2016 to 2022." http://www.researchandmarkets.com/reports/3979203/artificial-intelligence-market-by-technology

What I am trying to figure out here is whether there is a real opportunity for climate science (beyond the marketing hype).

AbruptSLR

Re: Relevancy of Machine Learning to climatic models
« Reply #10 on: December 13, 2016, 09:43:20 PM »
The first link leads to a website on "Climate Informatics", with updated information on machine learning and climate change:

http://www.climateinformatics.org/

The second link leads to a pdf of the "Proceedings of the 6th International Workshop on Climate Informatics: CI 2016"

https://opensky.ucar.edu/islandora/object/technotes:543

Abstract: "Climate informatics is an emerging research area that combines the fields of climate science and data science (specifically machine learning, data mining and statistics) to accelerate scientific discovery in climate science. The annual climate informatics workshop, held at NCAR's Mesa Lab since 2012, promotes new collaborations and discusses new methods and directions for this emerging field. This year's proceedings contain 34 peer-reviewed short papers presented at the workshop, which describe many new methods and advances in the field. Making these papers available to all interested researchers is essential to maximize further advances in this important field."


The third link leads to a book based on 2014 information entitled: "Machine Learning and Data Mining Approaches to Climate Science"

http://www.springer.com/us/book/9783319172194

Summary: "This book presents innovative work in Climate Informatics, a new field that reflects the application of data mining methods to climate science, and shows where this new and fast growing field is headed. Given its interdisciplinary nature, Climate Informatics offers insights, tools and methods that are increasingly needed in order to understand the climate system, an aspect which in turn has become crucial because of the threat of climate change. There has been a veritable explosion in the amount of data produced by satellites, environmental sensors and climate models that monitor, measure and forecast the earth system. In order to meaningfully pursue knowledge discovery on the basis of such voluminous and diverse datasets, it is necessary to apply machine learning methods, and Climate Informatics lies at the intersection of machine learning and climate science. This book grew out of the fourth workshop on Climate Informatics held in Boulder, Colorado in Sep. 2014."


The fourth link leads to a 2014 article entitled: "What Machine Learning Can Do For Climate Science"

http://www.planetforward.org/2014/05/12/what-machine-learning-can-do-for-climate-science


6roucho

Re: Relevancy of Machine Learning to climatic models
« Reply #11 on: December 14, 2016, 12:36:44 AM »
Quote from: AbruptSLR
I provide the following reference, co-authored by Murray Gell-Mann, that could be used to better address climate change issues including:  risk, insurance, revenue neutral carbon pricing, and other topics.  This reference makes it very clear that most humans (even most experts) have a very weak intuitive understanding of their own ignorance (which results in a poor understanding of gambles/risk that we are all exposed to w.r.t. climate consequences).
Absolutely.

I had the opportunity in the 1990s to work on the project at Lloyd’s to calculate the premium for reinsuring the bad long-tailed risk that was threatening the existence of the corporation. As part of that, we tried to understand why it had happened.

We highlighted three main causes:

•   Competition for profits
•   Consequences that played out in the future, beyond the likely incumbency of underwriters
•   A systematic underestimation of catastrophe risk

The irony is that insurance is well-served with excellent models, whose development budgets can often be many times more than pure science has to play with, but in this case underwriters with no mathematical knowledge chose to ignore them.

[Another is that if you substitute politician for underwriter, this relatively small business crisis (Lloyd's survived, even if all the Names who financed the insurance didn’t) played out much the same as the much larger catastrophe of climate change.]

I think that one area where machine learning has a lot to offer is in the interpretation of science by policymakers. If we go back to finance, systems that learn about markets can be exceptionally effective traders, because what humans tend to do in uncertain situations is throw away the mathematics, and instead bet by instinct, which black swan theory (and recent history) suggests is systematically optimistic when it comes to catastrophic events. The human instinct is to double down on optimistic bets when no good can come of the worst-case scenario.

Which is what some politicians are doing right now with climate change.
« Last Edit: December 14, 2016, 06:36:30 AM by 6roucho »

DrTskoul

Re: Relevancy of Machine Learning to climatic models
« Reply #12 on: December 14, 2016, 01:05:07 AM »
Machine learning, data analytics, etc., are used to detect relationships and patterns in massive amounts of data. The data can be real (measurements) or modeled (climate system simulations).

I hear about it a lot in chemical engineering research circles (e.g., catalyst development, or process simulation, optimization and control). A few years ago we had high-throughput experimentation.

Analysing satellite images for patterns is a perfect example of an ML application.

ML won't find a better model for Arctic ice dynamics.

gerontocrat

Re: Relevancy of Machine Learning to climatic models
« Reply #13 on: February 18, 2019, 12:19:52 PM »
https://www.bbc.co.uk/news/science-environment-47267081

Machine learning might not be all it's cracked up to be.

AAAS: Machine learning 'causing science crisis'
Machine-learning techniques used by thousands of scientists to analyse data are producing results that are misleading and often completely wrong.
Quote
Dr Genevera Allen from Rice University in Houston said that the increased use of such systems was contributing to a “crisis in science”.

She warned scientists that if they didn’t improve their techniques they would be wasting both time and money. Her research was presented at the American Association for the Advancement of Science in Washington.

A growing amount of scientific research involves using machine learning software to analyse data that has already been collected. This happens across many subject areas ranging from biomedical research to astronomy. The data sets are very large and expensive.

'Reproducibility crisis'
But, according to Dr Allen, the answers they come up with are likely to be inaccurate or wrong because the software is identifying patterns that exist only in that data set and not the real world.
“Often these studies are not found out to be inaccurate until there's another real big dataset that someone applies these techniques to and says ‘oh my goodness, the results of these two studies don't overlap‘," she said.

“There is general recognition of a reproducibility crisis in science right now. I would venture to argue that a huge part of that does come from the use of machine learning techniques in science.” The “reproducibility crisis” in science refers to the alarming number of research results that are not repeated when another group of scientists tries the same experiment. It means that the initial results were wrong.
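The failure mode Dr Allen describes is easy to reproduce in a toy sketch (synthetic noise, assumed numbers): scan enough pure-noise "features" and one of them will correlate impressively with any outcome, but only in that data set.

```python
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

n, n_features = 30, 500  # small sample, many candidate "features", all noise
outcome = [random.gauss(0, 1) for _ in range(n)]
features = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n_features)]

# "Discover" the feature most correlated with the outcome in this data set.
best = max(range(n_features),
           key=lambda j: abs(correlation(features[j], outcome)))
in_sample = abs(correlation(features[best], outcome))

# Try to replicate on five independent samples (the feature is pure noise,
# so a fresh measurement of it is just fresh noise): the pattern evaporates.
out_of_sample = sum(
    abs(correlation([random.gauss(0, 1) for _ in range(n)],
                    [random.gauss(0, 1) for _ in range(n)]))
    for _ in range(5)
) / 5
```

The selected feature typically shows a strong in-sample correlation; on fresh data it falls back to noise level, which is precisely why the two studies "don't overlap".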

"For the People's Cause, the Struggle Continues!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)