Introduction
There are numerous machine learning topics worth exploring, but any single person's time and focus are limited. The following paragraphs introduce and briefly explain four interesting topics for interested readers.
The topics that will be covered are:
· Generative Adversarial Networks (GANs)
· Federated Learning
· Natural Language Processing
· Transfer Learning
Among other things, readers will learn how data can be generated and made detailed enough to be perceived as real, and how knowledge from one model can be reused to complete a second, related task.
Generative Adversarial Networks (GANs)
GANs are generative models that create new data instances similar to the training data. A GAN has two parts:
· A generator part that produces new, fake data instances.
· A discriminator part that distinguishes the generator's fake data from real data.
Both parts are separate neural networks.
The discriminator is trained on two kinds of data: real data from a real-world dataset (think of these as positive examples fed into the system) and fake data generated by the generator network (think of these as negative examples that the discriminator must detect and classify as fake).
The generator is trained in a more complex way. It is penalized when it makes a mistake, that is, when the discriminator recognizes its output as fake, so it has to take that feedback into account and produce more convincing output. Training starts with random input, which passes through the generator neural network and is translated into a data instance.
Next, that instance goes through the discriminator, which tries to detect whether the data is real or fake and returns an answer. The generator loss is recalculated after each output: every time the discriminator returns "fake", that result is taken into account and the generator learns from its mistakes. A minimal sketch of this loop follows.
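The sketch below shows this adversarial training loop in PyTorch. The toy data, layer sizes, and hyperparameters are illustrative assumptions, not a tuned recipe:

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 8, 64

# Two separate networks: the generator maps noise to data-shaped output,
# the discriminator maps data to a probability of being real.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)       # stand-in for a real dataset batch
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: real data are the positive examples (label 1),
    # generated data the negative examples it must classify as fake (label 0).
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: it is penalized whenever the discriminator calls its
    # output fake, i.e. it improves by pushing discriminator(fake) toward 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```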
There are a number of GAN variations, namely:
· Conditional GANs
· Image-to-Image Translation
· Cycle GAN
· Face Inpainting
· Many more (text-to-image synthesis, super-resolution, etc.)
Federated Learning
Federated learning, also known as collaborative learning, is a technique that trains an algorithm across a number of decentralized edge devices or servers holding local data samples, without any exchange of those samples. This approach stands in contrast to centralized machine-learning techniques, where all datasets are uploaded to a single server, as well as to more common decentralized approaches that assume the local data samples are distributed in the same way.
The general idea is to train local models on local data samples and to exchange only the parameters (the weights and biases of a neural network) between the local nodes in order to build a global model shared by all nodes, as in the sketch below.
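Here is a minimal sketch of one common scheme, federated averaging (FedAvg); the `local_train` function is a hypothetical stand-in for each node's local training:

```python
import numpy as np

def local_train(global_weights, local_data):
    # Hypothetical placeholder: in practice this would run a few epochs of
    # gradient descent on the node's own data, which never leaves the node.
    return global_weights + 0.01 * np.random.randn(*global_weights.shape)

def federated_round(global_weights, node_datasets):
    # Each node trains locally; only the resulting parameters are sent back.
    local_updates = [local_train(global_weights, d) for d in node_datasets]
    # The server averages the parameters into a new shared global model
    # (weighting by local dataset size is common in practice).
    return np.mean(local_updates, axis=0)

global_weights = np.zeros(10)          # toy global model parameters
node_datasets = [None] * 5             # stand-ins for 5 nodes' private data
for _ in range(3):                     # a few communication rounds
    global_weights = federated_round(global_weights, node_datasets)
```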
Natural Language Processing (NLP)
Natural Language Processing is a field of Machine Learning in which the computer learns to understand, analyze, manipulate, and generate human language.
Some of the problems it tackles are natural language understanding and speech recognition.
Some of the most common techniques used are:
- Using word embeddings to capture the semantic meanings of words (see the sketch after this list)
- End-to-end learning of a higher-level task instead of relying on a pipeline of intermediate tasks
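As an illustration of word embeddings, here is a minimal sketch using gensim's Word2Vec (gensim 4.x API); the tiny corpus is only a stand-in, since useful embeddings require far more text:

```python
from gensim.models import Word2Vec

# Toy corpus: lists of tokens; words in similar contexts get similar vectors.
corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "ball"],
]
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=200)

# "king" and "queen" appear in near-identical contexts, so their vectors
# should be closer to each other than to "ball".
print(model.wv.similarity("king", "queen"))
print(model.wv.similarity("king", "ball"))
```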
What are some common tasks and steps in NLP?
They include:
- Text and/or speech processing, for example speech recognition
- Morphological analysis, for example reducing words to their root forms (lemmatization)
- Syntactic analysis, for example parsing
- Lexical semantics, for example sentiment analysis or the meaning of individual words in a given context
- Relational semantics, for example how things are connected in a sentence or the relationships between individuals in a text
- Discourse, for example dividing a text into different topics and identifying what each topic is about
- Higher-level NLP applications, for example text summarization
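Several of these steps can be run in one pass with spaCy's small English pipeline; this sketch assumes the `en_core_web_sm` model has been downloaded (`python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cats were sitting quietly in the Berlin office.")

# Morphological and syntactic analysis: lemma, part of speech, dependency role.
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_)

# Named entities, one building block for relational semantics.
for ent in doc.ents:
    print(ent.text, ent.label_)
```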
Transfer Learning
Transfer learning is a research topic in the field of Machine Learning in which one problem is solved, and the knowledge from that problem is saved and later applied to a second, related problem.
Traditionally in ML, each task is separate: each model is trained on its own and completes its task in isolation, so no knowledge has to be retained or remembered for future tasks.
In transfer learning, a new task can be completed with knowledge gathered from a previously completed task that is somehow connected to the task at hand. This makes for a faster learning curve and can require less data to train the new model.
This idea can be applied in a couple of ways: by creating or finding a network already trained on some original problem and then adapting it to the task to be completed. The new task can be slightly different within the same domain, or it can be the same task in a different domain. A sketch of this adaptation follows.
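As a sketch of adapting a pre-trained network, a model trained on ImageNet can be reused by freezing its layers and replacing only the final one; the choice of ResNet-18 and the 5-class output are illustrative assumptions (torchvision 0.13+ API):

```python
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet: the "knowledge" to be transferred.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so their learned features are retained.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer; this new head is then trained
# on the (smaller) dataset of the new, related task with 5 classes.
model.fc = nn.Linear(model.fc.in_features, 5)
```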
Conclusions
There are many topics worth looking into in the world of Machine Learning and Data Science, depending on what each person's projects are about.
In this article, four were covered, with an overview of how each model's workflow looks, to help visualize and understand the problems more clearly.
Two of the types mentioned rely on cooperation in one case (federated learning) and opposition in the other (GANs), showing how diverse and wide the range of applications can be in the Machine Learning and Deep Learning world.