Challenges in #MachineLearning Adaptation 

It’s very possible that, by the time you read these lines, you have already used the results of machine learning algorithms several times today: your favorite social network may have suggested new friends, your search engine may have ranked certain pages as relevant to your history, and so on. You may have dictated a message on your phone, or read an article selected for your news feed based on your preferences and perhaps translated automatically. And even without using a computer, you may have listened to the news or simply heard the weather forecast.

 

We are living in a world where most of the transactions and stock decisions that make and unmake an economy, and increasingly even medical diagnoses, depend more on the quality of algorithms than on that of human experts, who are incapable of processing the mountain of information needed for relevant decision-making.

 

Algorithms that learn from data in order to make predictions and data-driven decisions are called machine learning algorithms. These automated learning algorithms and systems have made significant advances in recent years thanks to the availability of large volumes of data and intensive computing, as well as notable advances in optimization. A major feature of deep learning is its ability to learn descriptors while clustering the data. However, many limitations and challenges remain, which we have classified as: data sources; symbolic representations vs continuous representations; continuous and endless learning; learning under constraints; computing architectures; unsupervised learning; learning with human intervention, explanations …

 

Data Sources

There are many challenges in this area, such as learning from heterogeneous data available on multiple channels; managing uncertain information; identifying and processing rare events beyond purely statistical approaches; combining knowledge sources and data sources; integrating models and ontologies into the learning process; and finally, achieving good learning performance with little data, when massive data sources are not available.

 

Symbolic Representations vs Continuous Representations

Continuous representations allow the machine learning (ML) algorithm to approximate complex functions, while symbolic representations are used to learn rules and symbolic models. The most significant recent advances concern continuous representations. These, however, leave reasoning aside, whereas it would be desirable to integrate reasoning into continuous representations so that inferences can be made on numerical data. Moreover, in order to exploit the power of deep learning, it may be useful to define continuous representations of symbolic data, as has been done for text with the word2vec and text2vec representations, for example.
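
As a small illustration of the last point, the sketch below (assuming the gensim library, version 4.x, and a toy corpus invented here) maps symbolic tokens to continuous word2vec vectors that can then be compared numerically or fed to a neural network.

    # Minimal word2vec sketch using the gensim library (assumed installed, gensim 4.x API).
    # The toy corpus and parameter values are illustrative, not taken from the article.
    from gensim.models import Word2Vec

    corpus = [
        ["machine", "learning", "learns", "from", "data"],
        ["deep", "learning", "learns", "representations"],
        ["symbolic", "rules", "support", "reasoning"],
    ]

    # Train continuous vector representations of the symbolic tokens.
    model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, epochs=50)

    # Each word is now a dense numerical vector.
    vector = model.wv["learning"]
    print(vector.shape)                            # (50,)
    print(model.wv.similarity("machine", "deep"))  # cosine similarity between two tokens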

 

Continuous and endless learning

Some AI systems are expected to be resilient, in order to operate 24/7 without interruption. Interesting advances have been made in lifelong learning systems that continually build new knowledge while they are operating. The challenge here is the ability of AI systems to operate online in real time, while being able to revise, on their own, existing beliefs learned from previous cases. Bootstrapping is an option for these systems, because it allows elementary knowledge acquired at the beginning of operation to guide future learning tasks, as in the NELL (Never-Ending Language Learning) system developed at Carnegie Mellon University.
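
To make the bootstrapping idea concrete, here is a hedged self-training sketch (not the actual NELL implementation, and using synthetic data with scikit-learn): a classifier trained on a small seed of labeled examples labels the unlabeled examples it is most confident about, and that newly acquired knowledge guides the next round of learning.

    # Illustrative self-training (bootstrapping) loop; NELL itself is far more elaborate.
    # Assumes scikit-learn and numpy; the data here is synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    labeled = np.zeros(len(y), dtype=bool)
    labeled[:20] = True                       # small seed of "elementary knowledge"

    model = LogisticRegression(max_iter=1000)
    for round_ in range(5):
        model.fit(X[labeled], y[labeled])
        proba = model.predict_proba(X[~labeled])
        confident = proba.max(axis=1) > 0.95  # only accept high-confidence guesses
        if not confident.any():
            break
        idx = np.where(~labeled)[0][confident]
        y[idx] = model.predict(X[idx])        # self-assigned labels guide later rounds
        labeled[idx] = True
        print(f"round {round_}: {labeled.sum()} examples now labeled")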


Learning under constraints

Privacy protection is undoubtedly the most important constraint to take into account. Machine learning researchers have recently recognized the need to protect privacy while continuing to learn from personal data (records about individuals), and privacy-preserving learning systems are being developed for this purpose. More generally, machine learning must take into account other external constraints such as decentralized data or energy limitations. Research on the general problem of machine learning with external constraints is therefore necessary.
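
As one hedged illustration of privacy-oriented learning (the article does not name a specific technique), the sketch below clips per-example gradients and adds calibrated noise before averaging, in the spirit of differentially private gradient descent, on a synthetic linear-regression task.

    # Toy differentially-private-style gradient step (illustrative, not a vetted DP library).
    # Assumes numpy; the "personal" records are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

    w = np.zeros(5)
    clip_norm, noise_scale, lr = 1.0, 0.5, 0.1

    for _ in range(100):
        # Per-example gradients of the squared error for a linear model.
        grads = 2 * (X @ w - y)[:, None] * X
        # Clip each example's gradient so no single record dominates the update.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        # Add noise calibrated to the clipping bound before averaging.
        noisy_mean = grads.mean(axis=0) + rng.normal(scale=noise_scale * clip_norm / len(X), size=5)
        w -= lr * noisy_mean

    print(w)  # approximate weights learned while limiting the influence of any single record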

 

Computing architectures

Modern machine learning systems require intensive computing performance and efficient data storage in order to scale with data size and problem dimensionality. Algorithms will run on GPUs and other powerful architectures, and data and processing must be distributed across multiple processors. New research needs to focus on improving machine learning algorithms and problem formulations to make the most of these computing architectures.
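
A hedged sketch of the "distribute data and processing" idea, using only Python's standard multiprocessing module: partial statistics are computed by separate worker processes over shards of the data and then combined.

    # Minimal data-parallel pattern: shard the data, process shards in parallel, combine.
    # Uses the Python standard library plus numpy; the details are illustrative.
    import numpy as np
    from multiprocessing import Pool

    def partial_sums(shard):
        # Each worker returns the sufficient statistics for its shard.
        return shard.sum(axis=0), len(shard)

    if __name__ == "__main__":
        data = np.random.rand(400_000, 8)
        shards = np.array_split(data, 4)      # one shard per worker process

        with Pool(processes=4) as pool:
            results = pool.map(partial_sums, shards)

        total = sum(s for s, _ in results)
        count = sum(n for _, n in results)
        print(total / count)                  # global mean assembled from partial results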

 

Unsupervised Learning

The most remarkable results in machine learning have been obtained with supervised learning, that is, learning from examples in which the expected result is provided along with the input data. This requires prior labeling of the data with the corresponding expected results, a process that demands large amounts of labeled data. Amazon’s Mechanical Turk (www.mturk.com) is a perfect example of how large companies mobilize human resources to annotate data. But the vast majority of data comes with no expected result, i.e., without the desired annotation or class name. It is therefore necessary to develop unsupervised learning algorithms to handle this enormous amount of unlabeled data. In some cases, a minimal amount of human supervision can be used to guide the unsupervised algorithm.
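
Clustering is one standard unsupervised technique; the sketch below (assuming scikit-learn and numpy, with invented data) groups unlabeled points without ever seeing an annotated expected result.

    # Minimal unsupervised example: k-means clustering of unlabeled data (scikit-learn assumed).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Unlabeled data: two blobs, but no class names are given to the algorithm.
    X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(5, 1, size=(100, 2))])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_[:10])        # cluster assignments discovered without supervision
    print(kmeans.cluster_centers_)    # the structure found in the data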

 

Learning process with human intervention, explanations

The challenges here relate to establishing a natural collaboration between machine learning algorithms and their users in order to improve the learning process. To do this, machine learning systems must be able to show their progress in a form that is understandable to humans. Moreover, it should be possible for the human user to obtain explanations from the system about any result obtained. These explanations would be provided during the learning process and could be linked to the input data or to intermediate representations; they could also indicate confidence levels, as appropriate.
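
As one hedged illustration (the article does not prescribe a method), the sketch below trains a small decision tree with scikit-learn and exposes two simple, human-readable outputs: a textual view of the learned rules and a confidence level attached to a prediction.

    # Illustrative "explainable" output: readable rules plus confidence levels (scikit-learn assumed).
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    # A form of explanation a human can inspect: the learned decision rules.
    print(export_text(tree, feature_names=list(data.feature_names)))

    # A confidence level attached to each prediction.
    sample = data.data[:1]
    print(tree.predict(sample), tree.predict_proba(sample))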

 

Transfer Learning

Transfer learning is useful when little data is available for learning a task. It consists in reusing, for a new task, knowledge that has been acquired on another task for which more data is available. This is a rather old idea (1993), but the results remain modest because it is difficult to implement: it implies being able to extract the knowledge the system acquired in the first place, and there is no general solution to this problem (how to extract that knowledge, how to reuse it, and so on). Another approach to transfer learning is “shaping”: it involves learning a simple task first and then making the task gradually more complex until the target task is reached. There are some examples of this procedure in the literature, but no general theory.
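
One common modern recipe (one possible form of transfer learning, not the only one) is to reuse a network pretrained on a data-rich task and retrain only its last layer on the small target task. The sketch below assumes PyTorch and a recent torchvision (0.13+), and the 5-class target task and placeholder batch are invented for illustration.

    # Hedged transfer-learning sketch: reuse a pretrained network, retrain only the final layer.
    # Assumes torch and torchvision (0.13+) are installed; the target task is illustrative.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # knowledge from a data-rich task

    for param in model.parameters():
        param.requires_grad = False        # keep the transferred knowledge frozen

    num_classes = 5
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # new head for the target task

    # Only the new layer is optimized with the (small) target dataset.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
    criterion = torch.nn.CrossEntropyLoss()

    images = torch.randn(8, 3, 224, 224)   # placeholder batch standing in for target data
    labels = torch.randint(0, num_classes, (8,))

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()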
