Challenges in #MachineLearning Adaptation 

It’s quite possible that by the time you read these lines, you’ve already used the results of machine learning algorithms several times today: your favorite social network may have suggested new friends, a search engine may have ranked certain pages as relevant to your history, and so on. You may have dictated a message on your phone, or read an article that appeared in your news feed based on your preferences and that may have been translated automatically. And even without using a computer, you may have listened to the news or simply heard the weather forecast.

 

We are living in a world where most of the transactions and stock decisions that make and unmake an economy, and increasingly even medical diagnoses, depend more on the quality of algorithms than on that of human experts, who are incapable of processing the mountain of information needed for relevant decision-making.

 

Algorithms that learn from data in order to issue predictions and make data-driven decisions are called machine learning. These automated learning algorithms and systems have made significant advances in recent years thanks to the availability of large volumes of data and intensive computing, as well as interesting advances in optimization. A major feature of deep learning is its ability to learn descriptors while clustering the data. However, there are many limitations and challenges, which we have classified as: data sources; symbolic representations vs continuous representations; continuous and endless learning; learning under constraints; computing architectures; unsupervised learning; and learning with human intervention and explanations.


Data Sources

There are many challenges in this area, such as: learning from heterogeneous data available on multiple channels; managing uncertain information; identifying and processing rare events beyond purely statistical approaches; combining knowledge sources and data sources; integrating models and ontologies into the learning process; and finally obtaining good learning performance with little data, when massive data sources are not available.

 

Symbolic Representations vs Continuous Representations

Continuous representations allow a machine learning (ML) algorithm to approximate complex functions, while symbolic representations are used to learn rules and symbolic models. The most significant recent advances concern continuous representations. These, however, leave out reasoning, while it would be desirable to integrate it into the continuous representation in order to make inferences on numerical data. Moreover, to exploit the power of deep learning, it may be useful to define continuous representations of symbolic data, as has been done, for example, for text with word2vec and text2vec representations.
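As a concrete illustration of mapping symbolic data to continuous vectors, here is a minimal word2vec sketch (assuming the gensim library; the corpus is a toy example of my own invention):

```python
# Minimal sketch: turning symbolic data (words) into continuous vectors.
# Assumes the gensim library; the corpus here is a toy example.
from gensim.models import Word2Vec

corpus = [
    ["machine", "learning", "learns", "from", "data"],
    ["deep", "learning", "uses", "neural", "networks"],
    ["networks", "learn", "representations", "of", "data"],
]

# Train a small word2vec model: each word becomes a 50-dimensional vector.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

print(model.wv["learning"][:5])          # continuous representation of a word
print(model.wv.most_similar("learning")) # nearest words in the vector space
```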

 

Continuous and endless learning

Some AI systems are expected to be resilient, in order to operate 24/7 without interruption. Interesting advances have been made in lifelong learning systems that continually build new knowledge while they are operating. The challenge here is the ability of AI systems to operate online in real time, while independently revising existing beliefs learned from previous cases. Bootstrapping is an option for these systems, because it allows elementary knowledge acquired at the beginning of operation to guide future learning tasks, as in the NELL (Never-Ending Language Learning) system developed at Carnegie Mellon University.
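One simplified way to picture this kind of online, never-ending revision is incremental learning, sketched below with scikit-learn's partial_fit API (an illustration on synthetic data; NELL itself works quite differently):

```python
# Sketch of online (incremental) learning: the model revises its parameters
# as new batches arrive, instead of being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

for step in range(100):  # endless in principle; bounded here for the demo
    X_batch = rng.randn(32, 5)                       # new observations
    y_batch = (X_batch.sum(axis=1) > 0).astype(int)  # their labels
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(rng.randn(3, 5)))
```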

 

Learning under constraints

Privacy protection is undoubtedly the most important constraint to be taken into account. Researchers specializing in machine learning have recently recognized the need to protect privacy while continuing to learn from personal data (from records about individuals). To this end, privacy-oriented learning systems are being developed. More generally, machine learning must take into account other external constraints, such as decentralized data or energy limitations. Research on the general problem of machine learning with external constraints is therefore necessary.
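One classical building block of privacy-oriented learning is differential privacy. The toy sketch below (my own illustration, not a production mechanism) adds calibrated Laplace noise to a count query so that no single individual's record can be inferred:

```python
# Toy differential-privacy sketch: the Laplace mechanism on a count query.
# With sensitivity 1 (one person changes the count by at most 1) and budget
# epsilon, noise drawn from Laplace(0, 1/epsilon) gives epsilon-DP.
import numpy as np

def private_count(records, predicate, epsilon=0.5, seed=None):
    rng = np.random.default_rng(seed)
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 38]
print(private_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy answer
```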

 

Computing architectures

Modern machine learning systems require intensive computing performance and efficient data storage to scale with data size and problem dimensionality. Algorithms will run on GPUs and other powerful architectures, and data and processes must be distributed across multiple processors. New research needs to focus on improving machine learning algorithms and problem formulations to make the most of these computing architectures.

 

Unsupervised Learning

The most remarkable results obtained in the field of machine learning are based on supervised learning, that is, learning from examples in which the expected result is provided with the input data. This involves prior labeling of the data with the corresponding expected results, a process that requires large-scale human effort. Amazon’s Mechanical Turk (www.mturk.com) is a perfect example of how large companies mobilize human resources to annotate data. But the vast majority of data exists with no expected result, i.e. without a desired annotation or class name. It is therefore necessary to develop unsupervised learning algorithms to manage this enormous amount of unlabeled data. In some cases, a minimal amount of human supervision can be used to guide the unsupervised algorithm.
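As a concrete illustration, here is a minimal unsupervised sketch (scikit-learn's k-means on synthetic data): the algorithm finds groups in unlabeled points without ever seeing an expected result:

```python
# Minimal unsupervised-learning sketch: k-means finds structure in
# unlabeled data; no expected result is ever provided.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(42)
# Synthetic, unlabeled data: two blobs in 2-D.
X = np.vstack([rng.randn(100, 2) + [0, 0],
               rng.randn(100, 2) + [5, 5]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
print(kmeans.labels_[:10])       # cluster assignment discovered by the model
print(kmeans.cluster_centers_)   # the two group centers it found
```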

 

Learning process with human intervention, explanations

The challenge here is to establish a natural collaboration between machine learning algorithms and users in order to improve the learning process. To do this, machine learning systems must be able to show their progress in a form that is understandable to humans. Moreover, it should be possible for the human user to obtain explanations from the system on any result obtained. These explanations would be provided during the learning process and could be linked to input data or intermediate representations. They could also indicate levels of confidence, as appropriate.
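One simple, widely used form of such explanation is reporting which input features drive a model's predictions. The sketch below uses scikit-learn's permutation importance (one technique among many; the dataset is a stand-in):

```python
# Sketch: explaining a model's output by permutation feature importance.
# Shuffling an informative feature degrades accuracy; the drop is a simple,
# human-readable explanation of what the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```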

 

Transfer Learning

Transfer learning is useful when little data is available for learning a task. It consists in using, for a new task, knowledge that was acquired on another task for which more data is available. This is a rather old idea (1993), but results remain modest because it is difficult to implement. Indeed, it implies being able to extract the knowledge that the system acquired in the first place, and there is no general solution to this problem (how to extract that knowledge, how to reuse it …). Another approach to transfer learning is “shaping”. It involves learning a simple task first, then gradually more complex ones, until the target task is reached. There are some examples of this procedure in the literature, but no general theory.
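In today's deep learning frameworks, the most common practical form of transfer learning is fine-tuning a pretrained network. A hedged Keras sketch (assuming TensorFlow and an ImageNet-pretrained backbone; data loading is left out):

```python
# Transfer-learning sketch: reuse knowledge from ImageNet (plentiful data)
# for a new task with little data, by freezing the pretrained backbone and
# training only a small new head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the knowledge acquired on the first task

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new task: 2 classes
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_dataset, epochs=5)  # few labeled examples suffice
```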

How is Artificial Intelligence impacting the Tourism Sector?

Artificial intelligence has existed for several years, yet it is now reaching another dimension, thanks to more powerful computers and the multiplication of available data. Given its capacity to lift every sector of activity, it undeniably represents great interest for tourism. With the wealth of data available to professionals, there is today a multitude of technologies: recommendation applications, real-time chatbots and personalized concierge services. The aim is to simplify the work of tourism industry professionals so that they can return to their core business with powerful tools and technologies, and make an important difference in terms of profit and customer satisfaction. But the question one must ask is: how can Artificial Intelligence be used wisely?

Artificial Intelligence and Tourism

The first point: if we think about the future of tourism in terms of types of travelers, it’s certain that we will be dealing with several categories of profiles, which may overlap. Our first category, for example, will consist, as is the case today, of travelers wishing to disconnect radically from their “everyday” environment in order to immerse themselves in another culture, by all possible means.

The second category, more cautious travelers, will want simple trips, without risks, even without surprises, good or bad. This does not exclude, quite the contrary, the survival of adventure tourism.

For the last profile, the purpose of a journey will be less the destination than the experience one can have there. These travelers will go somewhere to learn how to cook a rare product or to learn a new activity, based on information provided by their peers. The purpose of their travel will be learning.

Whatever the size of the group and the number of establishments it counts, it seems to me that we are moving towards a world where the tourist offer will continue to increase, thanks to two levers: new destinations and new traveler profiles. The industry will need to be extremely flexible towards customers’ expectations, responding with innovative services that accompany them at each stage of their journey: before, during and after their stay.

 

How can AI’s added value be applied to Tourism?
By customization. And that is what profoundly changes the ins and outs. Rather than offering the same experience for the same type of trip, artificial intelligence makes it possible to match the desires, habits and preferences of the tourist with the proposed product. Artificial intelligence makes a data pool meaningful: by learning what the customer is looking for, buying and loving, it makes it possible to generate customized and targeted offers that are more likely to be converted into a purchase.

Today, cognitive systems are capable of interacting in natural language; they can process a multitude of structured and unstructured data, enriched with geolocated content, and learn from each interaction. These systems will rapidly become essential to the development of strategic topics for the industry, such as the “smarter destination”, the personalization of the customer experience and customer loyalty, as well as the provision of management, analysis and marketing tools, all powered by Big Data. These services will be an asset in making the whole tourism sector more efficient by helping the actors and structures in place.

 

How far can artificial intelligence push the tourism industry?
Not to the point of replacing humans. Robots are used for certain tasks, but not as a replacement for people; in the long term this could happen, but the problem of the energy that robots consume must first be solved. When we refer to artificial intelligence, we often try to compare it with human intelligence, so it’s important to note that the aim of cognitive systems is NOT to replace human beings; robots cannot reason or learn as a human being can. They serve the needs and imagination of tourism professionals who, with the help of partners, benefit from them thanks to their knowledge.

 

As I’ve mentioned above, AI isn’t a new technology; we have been interested in it since the 1950s and 60s. If the subject seems quite new today, it is because the data has only now become available. Tourism, like all industries, is digitized and offers a wealth of data to which machine learning can be applied. So AI is a revolution in progress, to the extent that it leads to new ways of thinking about the supplier’s offer.

How #DeepLearning is revolutionizing #ArtificialIntelligence

This learning technology, based on artificial neural networks, has completely turned the field of artificial intelligence upside down in less than five years. “It’s such a rapid revolution that we have gone from a somewhat obscure system to a system used by millions of people in just two years,” confirms Yann LeCun, one of the creators of deep learning.

All the major tech companies, such as Google, IBM, Microsoft, Facebook, Amazon, Adobe, Yandex and even Baidu, are using it. This system of learning and classification, based on digital “artificial neural networks”, is used concurrently by Siri, Cortana and Google Now to understand speech and to learn to recognize faces.

 

What is “Deep Learning”?

 

In concrete terms, deep learning is a learning process that applies deep neural network technologies to enable a program to solve problems, for example, recognizing the content of an image or understanding spoken language: complex challenges on which the artificial intelligence community has worked for a long time.

 

To understand deep learning, we must return to supervised learning, a common technique in AI for allowing machines to learn. Basically, for a program to learn to recognize a car, for example, it is “fed” with tens of thousands of labeled car images. This “training” may require hours or even days of work. Once trained, the program can recognize cars in new images. In addition to its implementation in the field of voice recognition with Siri, Cortana and Google Now, deep learning is primarily used to recognize the content of images. Google Maps uses it to decipher text present in landscapes, such as street numbers. Facebook uses it to detect images that violate its terms of use, and to recognize and tag users in published photos (a feature not available in Europe). Researchers use it to classify galaxies.
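The train-then-recognize loop can be shown in a few lines. In this minimal scikit-learn sketch, the small built-in digits dataset stands in for the labeled car photos:

```python
# Supervised-learning sketch: "feed" the program labeled images, then let it
# recognize new ones. The 8x8 digits dataset stands in for car photos.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)  # "training"
print(clf.score(X_test, y_test))   # accuracy on images never seen before
print(clf.predict(X_test[:5]))     # recognizing new examples
```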

 

Deep learning also uses supervised learning, but the internal architecture of the machine is different: it is a “neural network”, a virtual machine composed of thousands of units (neurons), each performing small, simple calculations. The particularity is that the results of the first layer of neurons serve as input for the calculations of the next. This functioning by “layers” is what makes this type of learning “deep”.
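The layer-by-layer flow is easy to see in code. Here is a bare numpy sketch of a forward pass (toy sizes, random weights) in which each layer's output feeds the next:

```python
# Bare sketch of the "layers" idea: each layer of simple units performs
# small calculations whose results become the input of the next layer.
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(4)                      # input (e.g. pixel features)

W1, W2, W3 = rng.randn(8, 4), rng.randn(8, 8), rng.randn(2, 8)

h1 = np.maximum(0, W1 @ x)            # layer 1: weighted sums + ReLU
h2 = np.maximum(0, W2 @ h1)           # layer 2 consumes layer 1's output
out = W3 @ h2                         # final layer produces the prediction
print(out)
```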

 

One of the most spectacular achievements of deep learning took place in 2012, when Google Brain, the American firm’s deep learning project, was able to “discover” the concept of a cat by itself. This time, the learning was not supervised: the machine analyzed, for three days, ten million images taken from YouTube, chosen randomly and, above all, unlabeled. At the end of this training, the program had learned to detect the heads of cats and human bodies, forms that appeared frequently in the analyzed images. “What is remarkable is that the system discovered the concept of cat itself. Nobody ever told it what a cat was. This marked a turning point in machine learning,” said Andrew Ng, founder of the Google Brain project, in the columns of Forbes magazine.

 

Why are we talking about it so much today?

 

The basic ideas of deep learning go back to the late 80s, with the birth of the first neural networks. Yet the method has only known its hour of glory in the past few years. Why? Because while the theory was already in place, the practice became feasible only very recently. The power of today’s computers, combined with the mass of data now accessible, has multiplied the effectiveness of deep learning.

 

“By taking software that was written in the 1980s and running it on a modern computer, the results become much more interesting,” Andrew Ng told Forbes.

 

The field has advanced to the point that experts are now capable of building more complex neural networks, and the development of unsupervised learning is giving a new dimension to deep learning. Experts confirm that the more they increase the number of layers, the more the neural networks learn complicated and abstract things that come closer to human reasoning. For Yann Ollivier, deep learning will, within 5 to 10 years, become widespread in all decision-making electronics, such as cars or aircraft. He also thinks diagnostic aids in medicine will become more powerful via specialized neural networks. Robots, too, will soon be endowed with this artificial intelligence, according to him. “A robot could learn to do housework on its own, and that would be much better than robot vacuums, which are not so extraordinary!”

 

At Facebook, Yann LeCun wants to use deep learning “more systematically for the representation of information”; in short, to develop an AI capable of understanding the content of texts, photos and videos published by users. He also dreams of being able to create a personal digital assistant with whom it would be possible to converse by voice.

 

The future of deep learning seems very bright, but Yann LeCun remains cautious: “We are in a very enthusiastic phase, it is very exciting. But there is also a lot of nonsense being told, there are exaggerations. We hear that we will create intelligent machines in five years, that Terminator will eliminate the human race in ten years … There are also great hopes that some place in these methods, which may never materialize.”

 

In recent months, several personalities, including Microsoft founder Bill Gates, British astrophysicist Stephen Hawking and Tesla CEO Elon Musk, have expressed concerns about the potentially harmful progress of artificial intelligence. Yann LeCun is pragmatic, and recalls that the field of AI has often suffered from disproportionate expectations. He hopes that, this time, the discipline will not be the victim of this “inflation of promises”.

 


Secure #IOT: what if #BigData was the key?

By 2020, the planet will have more than 30 billion connected objects, according to IDC. The security of these objects is a major discussion topic. Ensuring the security, reliability, resilience and stability of these devices and services should be a critical concern not only for manufacturers and the companies using them, but also for end users. Security solutions abound on the market, but has anyone thought of Big Data?

 

The Internet of Things is the third industrial technological revolution, enabling companies to work smarter, faster and, of course, more profitably. IoT represents endless and challenging opportunities and, above all, it shows that a full-fledged ecosystem is being created. This is very different from big data as most companies know it, because they consider big data to be static: the data is generated in logs that have utility only where they are, because there is no connectivity. With the Internet of Things, the data is mobile.

 

A good example of the potential created by the Internet of Things is the work done by Deloitte and a medical device manufacturer to optimize the management of chronic diseases in patients with implanted devices. They established remote data transmission from patient pacemakers: the pacemakers communicate via low-energy Bluetooth and contact the healthcare provider through a handset. With this connected object, the physician can obtain real-time information to better determine treatment protocols.

 

However, there is one critical issue that still needs to be addressed to facilitate the adoption of the Internet of Things by every organization, and it concerns IoT security as well as all the elements that make it up. With billions of objects and terminals connected to the Internet, including cars, homes, toasters, webcams, parking meters, wearables, factories, oil platforms, energy networks and heavy equipment, the Internet of Things abruptly multiplies the attack surface, increasing the number of vulnerabilities and creating millions of opportunities for threats and attacks.

IOT Risk Management

The recent DDoS attack illustrates the alarming dangers and risks associated with unsecured devices and components of the Internet of Things. This should certainly raise awareness among businesses and individuals, and should lead them to take action for the security of the Internet of Things. According to a recent study released by computer security firm ESET and the NCSA (National Cyber Security Alliance), about 40% of respondents in the US have no confidence in the security and privacy of connected objects. These security issues will remain at the forefront as long as manufacturers do not seriously fix security vulnerabilities, and companies do not increase their internal cybersecurity measures to effectively detect and counter future security threats. Although many parameters must be taken into account to secure the Internet of Things (device security, network security, etc.), one of the key pieces of the puzzle is determining how to take advantage of the massive quantities of data continuously generated by the devices.

 

A data-driven approach to prevent IOT cyber attacks

 

Big data plays a crucial role in protecting a company and its assets against cyber threats. The future of the fight against IoT cybercrime will be based on the use of data for cybersecurity. According to a recent Forrester report, “IoT security means monitoring at least 10 times, if not more than 100 times, more physical devices, connections, authentications and data transfer events than today. Having a better ability to collect event data and intelligently analyze it across huge data sets will be crucial to the security of connected systems.”

Given all this, companies need to think about the two following things to prepare for this new era …

 

The first is that companies need to rethink the security perimeter. Recent attacks targeting connected objects have made clear that the “security perimeter” is now more conceptual than physical. The constantly evolving nature of our new hyperconnected world also leads to constantly evolving threats. As the technical community continues to connect the world and contribute innovations that improve home security, medical care and transport, it is clear that hackers will seek to exploit these same innovations for harmful purposes. We need to rethink the security perimeter as the corporate edge continues to expand beyond the traditional borders we were used to.

 

Then, threat detection must adapt to the scale of connected objects. As the world continues to hyper-connect, the number of security events that any enterprise must store, query and analyze is also increasing significantly. A cybersecurity platform capable of supporting billions of events is essential to ensure total supervision of all devices connecting to and accessing a company’s network. The use of technologies such as #MachineLearning for anomaly detection will allow companies to keep detecting suspicious behavior on workstations without any human intervention. The scalability of ML, coupled with the Internet of Things, will be the key to the early detection of IoT-specific threats.
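To make that last point concrete, here is a hedged sketch of machine-learning anomaly detection over device events (scikit-learn's IsolationForest on synthetic features; real deployments would stream far richer telemetry):

```python
# Sketch: unsupervised anomaly detection over device events, the kind of
# machine-learning layer that can flag suspicious IoT behavior at scale.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(7)
# Synthetic features per device: [connections/min, bytes sent, failed logins]
normal = rng.normal(loc=[20, 500, 1], scale=[5, 100, 1], size=(1000, 3))
attack = rng.normal(loc=[300, 9000, 40], scale=[30, 500, 5], size=(5, 3))
events = np.vstack([normal, attack])

detector = IsolationForest(contamination=0.01, random_state=7).fit(events)
flags = detector.predict(events)          # -1 = anomaly, 1 = normal
print(np.where(flags == -1)[0])           # indices flagged for review
```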

 

As we know, by 2020 the planet will have more than 30 billion connected objects. To get the most out of these revolutionary innovations and prevent them from becoming a nightmare in terms of IT security, organizations will have to learn how to manage, process, store, analyze and redistribute a vertiginous volume of data in real time, all while respecting security norms. We increasingly depend on these devices for essential services, and their behavior may have global reach and impact.

 


#MachineLearning: How #PredictiveAnalytics reinvents Customer Satisfaction

Billions of data points are collected on customer behavior from that huge platform called the internet. To these are added the valuable information gathered by organizations in every sector. In this mine of information, machine learning pursues an ultimate goal: to better understand customers in order to offer them the best possible experience, by proposing the product or service closest to their need. Its analytical power and the advances in artificial intelligence allow companies to take advantage of the wealth of data they collect.

At this point we all know that #bigdata is worth nothing, nada, without proper decryption. This is where machine learning, or “automatic learning”, comes into action. With its power of analysis, this field of artificial intelligence extracts the valuable information from the mass of data. In other words: it makes it possible to turn lead into gold, simplifying the customer’s life and improving their satisfaction thanks to precise analysis of their purchasing behavior.

 

Artificial Intelligence: algorithms and insights

Since its first general-public use in the late 1990s, machine learning has never stopped making headlines. Its most recent victory came in March 2016 via AlphaGo, Google’s software, against the legendary Lee Sedol. AlphaGo was one of the most notable examples of deep learning: the ability of a machine to independently analyze masses of data with an extremely high level of performance.

If such technological power remains exceptional, all of us experience everyday machine learning without knowing it. How? Just surf Amazon, LinkedIn, Spotify or Netflix and watch these platforms automatically offer suggestions matched to your precise tastes. These associations of ideas remain pertinent on subjects as fine as interest in a film, a song, a service or a cross-purchase. It is a much less superficial intelligence than it seems, and it delivers concrete results.
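Under the hood, many of these suggestions boil down to similarity in a user-item matrix. A deliberately tiny sketch of the idea (numpy, made-up ratings):

```python
# Tiny recommendation sketch: suggest items liked by users whose taste
# vectors are most similar (cosine similarity on a user-item matrix).
import numpy as np

# Rows = users, columns = items (1 = liked, 0 = no signal). Made-up data.
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0  # recommend for user 0
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]
neighbor = int(np.argsort(sims)[-2])          # most similar other user
suggest = np.where((ratings[neighbor] == 1) & (ratings[target] == 0))[0]
print(f"Suggest items {suggest} (liked by similar user {neighbor})")
```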

 

From big data to automatic learning

Well supplied with quality data, the algorithms dig deep into the vast meadows of the digital world. They cross-reference data far removed from one another to reveal information never before brought to light. These algorithms produce astonishing results that a human mind would have swept aside. For example, in a customer journey, deep learning can discover that purchase intent correlates with an action at a precise moment of the purchasing process. With automatic learning, one can therefore target with precision the important things that escape human understanding.

 

Machine learning: better tracking of customer routes

According to a Salesforce survey published in 2016, customer engagement is a top priority for organizations. Customer satisfaction is the main measure of success, surpassing even revenue growth and the acquisition of new customers. In this context, machine learning is a major ally.

From an operational point of view, most of the machine learning applications used today rely on a pre-learning phase. A large amount of data is processed during algorithm design, to better guide the search and more easily automate the answers that will be offered to online users. It amounts to a combination of human intelligence and artificial intelligence. The goal still to be reached, for each organization, is a user experience that is as simple and fluid as possible. Machine learning has already made it possible to take a major step forward thanks to the ultra-segmentation of profiles for a refined follow-up of customer journeys.

 

Sharing Data: the sinews of war

In order to function at full capacity, machine learning must benefit from first-class information. How is that possible? By adopting an omnivorous diet. Depending on the project, companies use the information they collect through cookies, geolocation, social networks and loyalty programs (which typically collect data on age, location, purchase history …).

Contrary to popular belief, consumers are rather inclined to share their data, but not at any price. This is evidenced by the “What is the future of data sharing?” study conducted by the Columbia Business School Center for Global Brand Leadership in 2015 with 8,000 Internet users in the United Kingdom, the United States, Canada, France and India. “Consumers are much more knowledgeable about the issue of data sharing than we originally suspected. According to our study, one of the determining factors in the decision to share data is trust in the brand,” says Matthew Quint, director of the Center for Global Brand Leadership. Researchers at Columbia Business School have come to the conclusion that more than 75% of Internet users more readily share their data with a brand they trust.

 

Customer data: Give and Take

Beyond trust, the sharing of information is based on a give-and-take approach. According to the same Columbia Business School study, 80% of consumers agree to share confidential information in exchange for a reward. It must be a “valuable offer, but this value must be clearly defined and easy to understand to hope for the best possible return on investment,” says Matthew Quint. Young consumers appear more willing than their elders to hand over their personal information, which promises bright days ahead for machine learning.

 

All the above points lead to the same conclusion: by using predictive analytics, organizations can gain a better understanding of their customers’ behavior and add a new layer of intelligence on top of it.

Big Data: 2017 Major Trends


Over the past year, we have seen more and more organizations store, process and exploit their data. In 2017, systems that support large amounts of structured and unstructured data will continue to grow. Tools should enable data managers to ensure the governance and security of Big Data while giving end users the ability to analyze these data themselves.

Here are the hot predictions for 2017.

 

The year of the Data Analyst – According to forecasts, the Data Analyst role is expected to grow by 20% this year. Job offers for this occupation have never been more numerous, and the number of people qualified for these jobs is also higher than ever. In addition, more and more universities and other training organizations offer specialized courses and deliver diplomas and certifications.

 

Big Data becomes transparent and fast – It is obviously possible to implement machine learning and perform sentiment analysis on Hadoop, but what will be the performance of interactive SQL? After all, SQL is one of the most powerful ways to access, analyze and manipulate data in Hadoop. In 2017, the options for accelerating Hadoop will multiply. This change has already begun, as evidenced by the adoption of high-performance databases such as Exasol or MemSQL, storage technologies such as Kudu, and other products enabling faster query execution.
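For readers who have not seen it, querying Hadoop-resident data with interactive SQL takes only a few lines in Spark. A hedged PySpark sketch (the HDFS path and column names are hypothetical):

```python
# Sketch: interactive SQL over data stored in Hadoop, via Spark SQL.
# The HDFS path and the schema (amount, country) are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("interactive-sql").getOrCreate()

orders = spark.read.parquet("hdfs:///data/orders")  # hypothetical dataset
orders.createOrReplaceTempView("orders")

top = spark.sql("""
    SELECT country, COUNT(*) AS n_orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY country
    ORDER BY revenue DESC
    LIMIT 10
""")
top.show()
```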

 

Big Data is no longer confined to Hadoop – In recent years, we have seen several technologies develop with the arrival of Big Data to cover the need for analysis on Hadoop. But for companies with complex, heterogeneous environments, the answers to their questions are distributed across multiple sources, ranging from simple files to cloud data warehouses, and structured data stored in Hadoop or other systems. In 2017, customers will ask to analyze all of their data. Platforms for data analytics will develop, while those designed specifically for Hadoop, which cannot be deployed across all use cases, will soon be forgotten.

 

An asset for companies: the exploitation of data lakes – A data lake is like a huge reservoir: you build a cluster and fill it with data in order to use it for different purposes such as predictive analysis, machine learning, cyber security, etc. Until now, only the filling of the lake mattered to organizations, but in 2017 companies will find ways to use the data gathered in their reservoirs to be more productive.

 

Internet of Things + Cloud = the ideal application of Big Data – The magic of the Internet of Things relies on Big Data cloud services. The expansion of these cloud services will make it possible to collect all the data coming from sensors, and also to feed the analyses and algorithms that will exploit them. Highly secure IoT cloud services will also help manufacturers create new products that can safely act on the gathered data without human intervention.

 

The combination of IoT, Cloud and Big Data generates new opportunities for self-service analysis – It seems that by 2017 all objects will be equipped with sensors that send information back to the “mother server”. Data gathered from IoT is often heterogeneous and stored across multiple relational and non-relational systems, from Hadoop clusters to NoSQL databases. While innovations in storage and integrated services have accelerated the process of capturing information, accessing and understanding the data itself remains the final challenge. We will see a huge demand for analytical tools that connect natively to, and combine, the large variety of data sources hosted in the cloud.

 

Data Variety is more important than Velocity or Volume – For Gartner, Big Data is made of 3 Vs: large Volume, high Velocity and a large Variety of data. Although all three Vs are evolving, Variety is becoming the main driver of investment in Big Data. In 2017, analytics platforms will be evaluated on their ability to provide a direct connection to the most valuable data in the data lake.

 

Spark and Machine Learning make Big Data undeniable – In a survey of data architects, IT managers and analysts, almost 70% of respondents favored Apache Spark over MapReduce, which is batch-oriented and does not lend itself to interactive applications or real-time processing. These large-scale processing capabilities have pushed Big Data platforms toward compute-intensive uses: machine learning, AI and graph algorithms. Self-service software vendors will be judged on how well they make data accessible to users, since opening ML to the greatest number will lead to the creation of more models and applications that will themselves generate petabytes of data.

 

Self-service data preparation is becoming increasingly widespread as the end user begins to work in a Big Data framework – The rise of self-service analytical platforms has improved the accessibility of Hadoop to business users. But users still want to reduce the time and complexity of preparing data for analysis. Agile self-service data preparation tools not only enable Hadoop data to be prepared at the source, but also make it accessible for faster and easier exploration. Companies specialized in end-user data preparation tools for Big Data, such as Alteryx, Trifacta and Paxata, are innovating, steadily reducing the entry barriers for those who have not yet adopted Hadoop, and will continue to gain ground in 2017.
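The kind of preparation these tools automate can be pictured with a small pandas sketch (a hypothetical raw sales extract): deduplicate, fix types and derive an analysis-ready column before exploration:

```python
# Sketch of self-service data preparation: deduplicate, fix types, handle
# missing values, and derive an analysis-ready column. Data is made up.
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3, 4],
    "date": ["2017-01-03", "2017-01-03", "2017-01-04", None, "2017-01-05"],
    "amount": ["10.5", "10.5", "7", "12.25", "bad"],
})

clean = (raw
         .drop_duplicates(subset="order_id")
         .assign(date=lambda d: pd.to_datetime(d["date"]),
                 amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"))
         .dropna(subset=["date", "amount"]))

clean["week"] = clean["date"].dt.isocalendar().week  # derived feature
print(clean)
```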

 

Data management policies in the hybrid cloud’s favor – Knowing where data comes from (not just which sensor or system, but which country) will enable governments to implement national data management policies more easily. Multinationals using the cloud will face divergent interests. Increasingly, international companies will deploy hybrid clouds with servers located in regional datacenters as the local component of a wider cloud service, to meet both cost-reduction objectives and regulatory constraints.

 

New security classification systems ensure a balance between protection and ease of access – Consumers are increasingly sensitive to the way data is collected, shared, stored, and sometimes stolen. This evolution will push for more regulatory protection of personal information. Organizations will increasingly use classification systems that organize documents and data into groups, each with predefined rules for access, redaction and masking. The constant threat posed by increasingly aggressive hackers will encourage companies both to increase security and to monitor access to and use of data.

 

With Big Data, artificial intelligence finds a new field of application – 2017 will be the year in which Artificial Intelligence (AI) technologies such as machine learning, natural language recognition and property graphs are used routinely to process data. While they were already accessible for Big Data via API libraries, we will gradually see these technologies multiply in the IT tools that support applications, real-time analytics and the scientific exploitation of data.

 

Big Data and big privacy – Big Data will have to face immense challenges on the privacy front, in particular with the new regulations introduced by the European Union. Companies will be required to strengthen their confidentiality control procedures. Gartner predicts that by 2018, 50% of violations of a company’s ethical rules will be data-related.

 

Sources:

Top 10 Big Data Trends 2017 – Tableau

Big Data Industry Predictions for 2017 – Inside Bigdata

Machine Learning and a powerful Customer Service


Every business-customer couple is looking for a certain harmony. But like any other private couple, it cannot exist without first having a strong knowledge of one another.

Monday is your birthday. You open your emails and, surprise, your favorite shoe brand sends its best wishes with a discount code. But before you go to use this delicate attention, you notice that the recipient is wrong, meaning the mail and promo code might not be for you. And what’s more annoying than receiving a mailing from one’s favorite brand with such an error? Unfortunately, this kind of mistake is not so rare in written exchanges between a customer and a business. And although regrettable, it is sometimes only the tip of the iceberg in the customer relationship.

 

Such errors in the commercial couple not only irritate; they can also cause a breach of the company-client marriage contract. Even loyal customers may fly away if their trusted brand is not able to store essential information about them, because customers expect to be heard and acknowledged, to be treated with the utmost care and personalization, and to receive responses promptly.

Like any other couple, the business couple has its ups and downs. However, with a little effort the relationship can be made stable. And for a business, knowing a customer like the back of one’s hand is a must.

 

Machine learning is based on algorithms that can learn from data without relying on rules-based programming. To be able to work on its commercial couple and secure its present and future, every company therefore needs to know how to collect its customer data and use it effectively.

The development of data-gathering tools, databases, behavioral segmentation techniques and connected data feedback from the field are all opportunities in which it is necessary to invest in order to create a real connection with the customer. Only by knowing the “who”, “what”, “when”, “how” and “why” of the act of buying will companies be able to provide personalized service.

 

That said, collecting data alone is not the happy ending of the story. Like any old couple who know each other by heart, if one does not anticipate the desires, expectations and limits of the other and act accordingly, misunderstandings and conflicts are born. For a harmonious long-term relationship, companies must therefore capitalize on this newly gathered data, and to do so machine learning is the best option. Describing the ability of a computer not only to calculate but to learn without being explicitly programmed, machine learning analyzes the raw data, synthesizes it, and then leaves it to companies to act on the findings according to the relevance of their “data-driven strategy”.

 

The duo of anticipation and empathy, a win-win for the commercial couple:

Today, customer data volumes are exploding: more data has been created in the past two years than in the entire previous history of the human race, thanks to the advent of digital channels and connected communication tools for customer interactions, and business has thus become very complex. Machine learning makes it possible not only to sort the data and keep only the essentials, but also to learn the needs, expectations and requirements of customers, so companies can anticipate their actions and harmonize the client relationship.

 

A well-synthesized insight, driven by approaches such as machine learning, can give any company the power to predict. In particular, I would say that customer profiling based on data from touch-points can allow companies not only to determine the stage of a customer within the sales funnel, but to predict their actions and reactions in the future. While only the biggest players have access to the technological know-how to do this well right now, it is only a matter of time before SMEs can replicate it and take advantage of the computing power already at their disposal.

 

Customer service teams are expected not only to react to requests and questions coming their way, but also to proactively anticipate customer needs. Machine learning is also about anticipation. It is a powerful tool for analyzing the actions of customers and sales assistants, and also for identifying keywords used throughout their conversations, in order to recognize problems and find, within the company’s knowledge base, a solution to the problem. Companies can identify urgent customer requests and respond quickly. A bit like detecting, in long conversations, THE topic which should not be overlooked and which fully deserves our attention.
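A toy sketch of that keyword idea (plain Python, with made-up keyword lists; real systems learn these cues rather than enumerating them) shows how a conversation can be routed by urgency and topic:

```python
# Toy sketch: flag urgent customer messages and route them to a topic,
# based on keywords. Real systems learn these cues instead of listing them.
URGENT = {"urgent", "immediately", "asap", "refund", "broken"}
TOPICS = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "delivery": {"shipping", "package", "late", "tracking"},
}

def triage(message: str):
    words = set(message.lower().split())
    urgency = "high" if words & URGENT else "normal"
    topic = max(TOPICS, key=lambda t: len(words & TOPICS[t]))
    return urgency, topic

print(triage("My package is late and I need a refund immediately"))
# ('high', 'delivery') -- urgent, mostly a delivery issue
```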

 

Beyond anticipation, machine learning also increases any business’s capacity for empathy. By learning from their exchanges, companies eventually learn a lot about their customers and can offer personalized services. The challenge is then to propose new goods and services informed by the customer’s purchase history, or to give free shipping or discounts to a loyal customer or on their birthday. A professional error can happen; in that case a company must admit its mistake and do everything to compensate the client at the right time, but it must also learn from its errors and take extra precautions to reduce the risk of problems in the future.

 

So machine learning is a predictive (and increasingly prescriptive) analytics approach that teaches computers to think and solve problems like a human, continuously adapting to new information. With machine learning, you can monitor the entire customer experience and gain not only new perspective but actual guidance on the best next steps to take, because it is virtually impossible to grow your business over time without putting the customer first. While there are plenty of tools and services that let you streamline aspects of marketing and customer service, be wary of letting these resources overtake your entire business model. The only way to build a profitable business is by humanizing your brand and developing lasting connections with your customer base.

 

Artificial Intelligence Techniques to detect Cyber Crimes


When we talk about artificial intelligence, many imagine a world of science fiction where robots dominate. In reality, artificial intelligence is already improving current technologies such as online shopping, surveillance systems and many others.

 

In the area of cyber security, artificial intelligence is being used via machine learning techniques. Indeed, machine learning algorithms allow computers to learn and make predictions based on available known data. This technique is especially effective for the daily processing of millions of malware samples. According to AV-Test statistics, security analysts must examine more than 400,000 new malicious programs every day.

 

Security experts affirm that traditional detection methods (signature-based systems) are no longer really proactive in most cases. The task is even more difficult as, in a world dominated by copy-paste exploit cloning, security vendors must also manage third-party services and focus on detecting obfuscated exploit variants in order to protect their customers. Attackers are numerous, but automatic learning evens the odds in the struggle.

 

Applying Artificial Intelligence to cyber security: more and more technology companies and security vendors are beginning to look for ways to integrate artificial intelligence into their cyber security arsenal. Many clustering and classification algorithms can be used to quickly and correctly answer the crucial question: “Is this file healthy or malicious?” For example, if a million files must be analyzed, the samples can be divided into small groups (called clusters) in which each file is similar to the others. The security analyst then only has to analyze one file in each group and apply the results to the others.
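A hedged sketch of that workflow (scikit-learn, with synthetic vectors standing in for per-file features such as API-call counts): cluster the samples, then surface one representative per cluster for the analyst:

```python
# Sketch: cluster a large set of files by similarity, then give the analyst
# one representative file per cluster. Features are synthetic stand-ins.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(1)
# Pretend feature vectors for 10,000 files (e.g. API-call counts, entropy).
files = np.vstack([rng.normal(c, 0.5, size=(2500, 8)) for c in range(4)])

clusterer = MiniBatchKMeans(n_clusters=4, n_init=10, random_state=1)
labels = clusterer.fit_predict(files)

for k in range(4):
    members = np.where(labels == k)[0]
    # Representative = member closest to the cluster center.
    center = clusterer.cluster_centers_[k]
    rep = members[np.argmin(np.linalg.norm(files[members] - center, axis=1))]
    print(f"cluster {k}: {len(members)} files, analyst reviews file {rep}")
```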

More importantly, machine learning achieves a high detection rate for the new malicious software in circulation, such as the notorious ransomware and zero-day malware, against which a security solution must be as efficient as possible. To be practical, every machine learning classifier used for malware detection must be tuned to produce a very small number, preferably zero, of false positives. It must also be possible to train on very large databases (using graphics processors or parallelism).
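Tuning for “preferably zero” false positives usually means moving the decision threshold rather than retraining. A small numpy sketch of the idea, with hypothetical classifier scores on a validation set:

```python
# Sketch: choose a decision threshold so that no benign validation sample
# is flagged (zero false positives), accepting some missed malware instead.
import numpy as np

rng = np.random.RandomState(3)
benign_scores = rng.beta(2, 8, size=5000)    # hypothetical model scores
malware_scores = rng.beta(8, 2, size=500)

threshold = benign_scores.max() + 1e-6       # strictest cut: zero FPs here
detected = (malware_scores >= threshold).mean()
print(f"threshold={threshold:.3f}, malware still detected: {detected:.1%}")
```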

The fundamental principle of machine learning is to recognize patterns in past experience and make predictions based on them. This means that security solutions can react more effectively and more quickly to new, unseen cyber threats than the traditional techniques and automated cyber-attack detection systems used before. Artificial intelligence is also well suited to fighting sophisticated attacks such as APTs (Advanced Persistent Threats), where attackers take special care to remain undetected for indefinite periods of time.

 

Man against the machine: breaking down the boundaries between man and machine, artificial intelligence is a very important cyber weapon, but it cannot take on the fight against cyber threats alone. As I mentioned in previous paragraphs, machine learning systems can produce false positives, and a human decision is needed to supply the algorithms with appropriate data.


Machine learning algorithms are, overall, more accurate than their human counterparts in assessing potential malware threats within large quantities of intelligence data. They also detect breaches more quickly. The hybrid approach generally used today is to have human analysts oversee the automatic learning. This has given the best results so far.

 

As for the future of AI, it is almost impossible to predict. Next year, machine learning will most likely focus on the creation of specific profiles for each user: when an action or behavior does not correspond to the predefined templates, the user will be flagged. For example, a spike in downloads over a short time will be marked as suspect and analyzed closely by a human expert.
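The download-spike example can be made concrete in a few lines (numpy, simulated hourly download counts): flag any hour far outside the user's own baseline:

```python
# Sketch: per-user behavioral profile as a simple baseline; an hour whose
# download count sits far above the user's norm is flagged for review.
import numpy as np

rng = np.random.RandomState(5)
downloads = rng.poisson(lam=4, size=168)   # a week of hourly counts
downloads[100] = 60                        # injected suspicious spike

mean, std = downloads.mean(), downloads.std()
z = (downloads - mean) / std
suspects = np.where(z > 4)[0]              # crude threshold on the z-score
print(f"hours flagged for human review: {suspects}")
```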

IoT: Biggest Revolution in Retail

If the IoT represents a huge opportunity for almost every facet of business, this is particularly true for supply chain, operations and analytics specialists. The leaders of e-commerce and traditional commerce see in IoT an opportunity for competitive advantage.

 

Even though I’ve already written about IoT in previous posts, let me give you a quick definition again. In 1999, Kevin Ashton (MIT Auto-ID Center) described the Internet of Things as a network of interconnected objects that generate data without any human intervention. Today, Gartner describes the IoT as “the network of physical objects containing embedded technology to communicate, detect or interact with their internal states or the external environment.”

 

[Figure: estimates for IoT revenue by region in 2020]

For some, IoT is only a new name for an old concept; the only thing that has recently changed is the evolution of cloud technology. According to a recent Gartner survey, IoT is one of the fastest-growing technological trends. Estimates say that by 2020 the number of connected objects will have multiplied by 26, to 30 billion. The main reason behind IoT’s success is the development of cloud-based solutions, which make it possible to actually access the data generated by connected objects.

 

The growth of IoT relies on three levers: the falling cost of embedded chips, technologies supported by cloud platforms and powered by Big Data analytics, and finally machine learning. An IBM case study named “The smarter supply chain of the future” reveals that in the near future the entire supply chain will be connected: not just customers, suppliers and IT systems in general, but also parts, products and other smart objects used to monitor the supply chain. Extensive connectivity will enable worldwide networks of supply chains to plan and make decisions together.

 

The main objective of such a connected supply chain is to gain better visibility, to reduce the impact of volatility at all stages of the chain, and to obtain better returns through a more agile product flow. Several developments already underway in the IoT are revolutionizing the retail supply chain at various levels:

 

On the client side: integration of the end consumer into the IoT. The main objective here is to collect customer data in order to create customized products and personalized offers while simplifying the purchasing process. Devices such as health trackers and connected watches continuously collect data from consumers and prescribers. The collected data represents a great opportunity for positioning products and services. For example, from a person’s browsing history, culinary tastes and influences on social networks, information on a nutrition bar can be offered to them. Recommendations may also be appropriate if the person has enrolled in a sports club or acquired a fitness tracker, and so on.

 

As for retailers: beyond the preparation of the assortment by merchants, there are smart shelves and the organization of the sales outlet. Purchasing behavior is changing rapidly, and with smart shelves a retailer’s system can analyze inventory, capacity and shipment information sent by suppliers. With such a predictive system, retailers and suppliers can avoid costly out-of-stocks or missed sales.

To return to the example of the nutrition bar, the time spent in front of a specific category of products (light yogurt, for example) can be an early indicator for changing suggestions or promotions. In addition, IoT integration in retail can allow a product line to automatically trigger orders, as sketched below. The whole environment can be configured to access a library of planograms, to store inventory data and related warehouse information, and to run restocking automatically. As the elements of this environment are already used independently, we can say that we are at the dawn of IoT in retail.
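Here is that automatic-order trigger in a few lines (plain Python; the shelf data and reorder point are hypothetical):

```python
# Sketch: a smart shelf triggers restocking automatically when the sensed
# quantity falls below a reorder point. Shelf data here is hypothetical.
REORDER_POINT = 12
ORDER_SIZE = 48

shelves = {"nutrition-bar": 9, "light-yogurt": 30, "granola": 11}

def restock_orders(shelf_levels, reorder_point=REORDER_POINT):
    return {sku: ORDER_SIZE
            for sku, qty in shelf_levels.items() if qty < reorder_point}

print(restock_orders(shelves))
# {'nutrition-bar': 48, 'granola': 48} -- orders triggered by the shelves
```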

 

If stores are at a less advanced stage in the application of IoT, transportation and warehousing are already well connected. The integration of RFID marked a first generation of data-oriented machines. Integrated tracking systems have long been used in transport and warehouse systems, and RFID tagging of pallets provides better visibility into the status and location of stocks. The convergence of demand signals and this increased visibility results in scenarios such as anticipatory shipping, for which Amazon has filed a patent. Increasing integration of IoT can lead to the efficient use of robots for material handling and of drones for delivery. These innovations challenge the effectiveness of existing systems, with machine-learning-based optimization offering an effective alternative.

 

Even with all the benefits it promises to offer companies, IoT is still a gamble, with big risks and unsolved problems. For any organization that has decided to embark on the IoT, a number of questions remain open, whether in technology, integration of file distribution systems with traditional ERP, APIs to communicate with sensors, or application languages (Python, ShinyR, et al.).

 

There are several interfaces that work well in specific areas, but more standardized platforms are needed. Industry experts have launched PaaS (Platform as a Service) offerings to integrate this growing IoT technology. Despite these challenges, the technology seems a surmountable obstacle. So far, only the legislation on collected data is a real problem, and customer acceptance also remains a challenge: in 2013, Nordstrom had to backtrack, at its customers’ demand, on a program that tracked customer movements through smartphone Wi-Fi and video analysis.

 

Finally, the important thing to remember is that the IoT is a revolutionary technology. Expert retailers, e-commerce players and technology solution providers will rethink, adapt and evolve the models and processes designed for organizations wishing to adopt the IoT. Retailers that take the lead in this space stand to gain an important advantage in an already competitive environment. Early adopters will be positioned to deliver more quickly IoT-enabled capabilities that can increase revenue, reduce costs and drive a differentiated brand experience. The IoT will be a disruptive force in retail operations.

 

 

Sources:

The Smarter Supply Chain Of The Future

The CEO Perspective: IOT for Retail Top Priorities to build a Successful Strategy

Machine Learning: An Artificial Intelligence approach


 

I’ve heard a lot of people say that Machine Learning is nothing more than a synonym for Artificial Intelligence, but that’s not true at all. The reality is that Machine Learning is just one approach to AI (in fact, it’s called the statistical approach).

 

Let me first give a definition of Machine Learning. It is a type of artificial intelligence that gives computers the ability to learn to do things via different algorithms. Artificial Intelligence, on the other hand, is about developing computer programs that perform tasks normally performed by humans, by giving machines (robots) the appearance of human intelligence.

 

If you are wondering what it means for a machine to be intelligent, it’s clear that “learning” is the KEY issue. Stuffing a lot of knowledge into a machine is simply not enough to make it intelligent. So before going further in this article, you must know that in the field of Artificial Intelligence there are two main approaches to programming a machine so it can perform human tasks: the Statistical Approach (also known as probabilistic) and the Deterministic Approach. Neither of these two approaches is superior to the other; they are simply used in different cases.

 

Machine Learning (statistical AI) is based on, yes, you’ve guessed right, statistics. It is a process whereby the AI system gathers, organizes, analyzes and interprets numerical information from data. More and more industries are applying ML to process improvement in the design and manufacture of their products.

 

There will be around 5 to 20 billion connected devices within 3 years, and these many capture points will be used to make live decisions, give recommendations, provide real-time information, detect weak signals and plan predictive maintenance. Whether at the level of business uses, industry and service sectors (health, distribution, automotive, public sector …) or even Business Intelligence, everything is changing! With Machine Learning and AI-based voice recognition technology, even Big Data technology might quickly be overtaken by real-time information.

 

In a preview of an upcoming e-book, “AI & Machine Learning”, UMANIS talks about data, machines and people. In the e-book they elaborate on the problems and expectations that companies face in the technological era.

 

Based on the responses of the 58 participants in the “AI & Machine Learning” survey, here below are the identified trends and indicators.

 

  • 44% of companies believe that AI and Machine Learning have become essential trends in various fields including education, healthcare, the environment and the business sector,
  • One company in two is curious about technological innovations in order to understand data collection (via machines),
  • A third of companies are currently on standby on AI & Machine Learning topics,
  • 21% of IT decision-makers are informed about the Cortana (Microsoft) and Watson (IBM) suites,
  • 36% want to go further with this type of technology,
  • 88% are planning to launch an AI project in more than 6 months,
  • 50% of respondents are unaware of the purpose of these technologies in the company.

 

TOP 5 issues:

  • Detecting anomalies
  • Using machine learning to optimize automation
  • Integrating a machine learning module into an existing information system
  • Remodeling the real-time data architecture to handle big volumes with high computing power
  • Finding a permanent solution for the storage and backup of collected data

 

There’s no doubt that the machine learning area is booming. It can be applied to high volumes of data to obtain a more detailed understanding of processes and to improve decision-making in manufacturing, retail, healthcare, life sciences, tourism, hospitality, financial services and energy. Machine learning systems can make precise predictions based on training data or past experience. By gathering the relevant information for making more accurate decisions, machine learning systems can help manufacturers improve their operations and competitiveness.