
3 TYPES OF MACHINE LEARNING?


There is a broad body of research in AI and machine learning, much of which feeds into and complements other work in the field.
Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.
As mentioned, machine learning is a subset of AI and is generally split into three main categories: supervised, unsupervised, and reinforcement learning.
1. Supervised learning
A common technique for teaching AI systems is to train them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data that has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog, or written sentences with footnotes indicating whether the word 'bass' relates to music or a fish. Once trained, the system can apply these labels to new data, for example to a dog in a photo that's just been uploaded.
This process of teaching a machine by example is called supervised learning, and the job of labeling these examples is commonly carried out by online workers employed through platforms like Amazon Mechanical Turk.
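To make the idea concrete, the sketch below shows supervised learning in miniature using Python and scikit-learn: a classifier is fitted on a handful of labeled examples and then asked to label new, unseen data. The feature values and labels are invented purely for illustration; this is a minimal sketch, not how large production systems are trained.

```python
# A minimal supervised-learning sketch (hypothetical toy data).
# Each training example is a feature vector paired with a label.
from sklearn.neighbors import KNeighborsClassifier

# Labeled examples: [weight in kg, height in cm] for two kinds of pet.
X_train = [
    [4.0, 25.0],    # cat
    [4.5, 27.0],    # cat
    [20.0, 60.0],   # dog
    [25.0, 65.0],   # dog
]
y_train = ["cat", "cat", "dog", "dog"]

# "Training" a nearest-neighbour classifier means storing the labeled examples.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# The trained model applies the labels it learned to new, unlabeled data.
new_examples = [[4.2, 26.0], [22.0, 62.0]]
print(model.predict(new_examples))  # expected: ['cat' 'dog']
```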
Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.
In the long run, having access to huge labeled datasets may also prove less important than access to large amounts of computing power.
In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labeled data can then generate huge amounts of fresh data to teach themselves.
This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labeled data than is necessary for training systems using supervised learning today.
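GANs are one route to this. A much simpler way to see the semi-supervised idea is self-training (also called pseudo-labeling), sketched below: a model trained on a small labeled set assigns provisional labels to unlabeled data and is then retrained on both. This illustrates the general principle only; it is not the GAN-based approach described above, and the toy data is invented.

```python
# A minimal self-training (pseudo-labeling) sketch -- hypothetical toy data.
# Idea: learn from a few labeled points, then let the model label the rest.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A small labeled set: values near 0 are class 0, values near 5 are class 1.
X_labeled = np.array([[0.1], [0.3], [4.8], [5.2]])
y_labeled = np.array([0, 0, 1, 1])

# A larger unlabeled set drawn from the same two clusters.
X_unlabeled = np.concatenate([rng.normal(0.0, 0.5, (50, 1)),
                              rng.normal(5.0, 0.5, (50, 1))])

# Step 1: train on the few labeled examples only.
model = LogisticRegression()
model.fit(X_labeled, y_labeled)

# Step 2: let that model assign pseudo-labels to the unlabeled data.
pseudo_labels = model.predict(X_unlabeled)

# Step 3: retrain on the labeled and pseudo-labeled data combined.
model.fit(np.concatenate([X_labeled, X_unlabeled]),
          np.concatenate([y_labeled, pseudo_labels]))

print(model.predict([[0.2], [5.1]]))  # expected: [0 1]
```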
2. Unsupervised learning
In contrast, unsupervised learning takes a different approach: algorithms try to identify patterns in data, looking for similarities that can be used to categorize that data.
An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.
The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities -- for example, Google News grouping together stories on similar topics each day.
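As a concrete sketch of this, the k-means algorithm below is given only fruit weights, with no labels at all, and groups them into two clusters by similarity. The weights are made up for illustration, and k-means is just one of many clustering algorithms.

```python
# A minimal unsupervised-learning sketch (hypothetical toy data).
# No labels are provided; k-means simply groups similar weights together.
from sklearn.cluster import KMeans

# Fruit weights in grams -- roughly two natural groups (berries vs. apples).
weights = [[5.0], [7.0], [6.0], [150.0], [160.0], [155.0]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(weights)

# Each weight gets a cluster id (0 or 1); the algorithm never saw fruit names.
print(cluster_ids)               # e.g. [0 0 0 1 1 1]
print(kmeans.cluster_centers_)   # approximate centre of each group
```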
3. Reinforcement learning
A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.
In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.
An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to beat human performance in a variety of classic video games. The system is fed pixels from each game and works out information about the state of play, such as the distance between objects on screen.


By also looking at the score achieved in each game, the system builds a model of which action will maximize the score in different circumstances -- for instance, in the case of the video game Breakout, where the paddle should be moved in order to intercept the ball.
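The sketch below shows the same trial-and-error idea in its simplest form: tabular Q-learning on a tiny made-up "corridor" task, where the agent learns which action maximizes its reward in each state. It is not DeepMind's Deep Q-network, which replaces the table with a deep neural network fed on raw pixels; the environment and reward here are invented purely for illustration.

```python
# A minimal reinforcement-learning sketch: tabular Q-learning on a toy corridor.
# The agent learns, by trial and error, that moving right maximizes its reward.
import random

N_STATES = 5          # positions 0..4; reaching position 4 earns a reward
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action]: the learned estimate of how good each action is in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly pick the best-known action, but sometimes explore at random.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate towards the reward plus the value of the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the best action in every non-terminal state should be "right".
print(["right" if q[1] > q[0] else "left" for q in Q[:-1]])
```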

