Top 10 Advanced Java Interview Questions
1. What are the differences between abstract classes and interfaces in Java? (a short sketch follows this list)
2. What is the difference between ArrayList and LinkedList in Java?
3. What is the purpose of the finalize() method in Java?
4. What is polymorphism in Java and how is it achieved?
5. What are the different types of inner classes in Java?
6. What is the difference between static and non-static methods in Java?
7. What are the different types of exceptions in Java and how do they differ?
8. What is the difference between checked and unchecked exceptions in Java?
9. How does Java handle multithreading and synchronization?
10. What are the different types of JDBC drivers in Java and how do they differ?
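As a quick warm-up for the first question above, here is a hedged sketch contrasting an abstract class, which can hold state and partial implementation, with an interface, which defines a contract and (since Java 8) may carry default methods. The class names are invented for the example.

```java
// A hedged sketch for question 1: abstract class vs. interface (class names are invented).
abstract class Shape {
    private final String name;                  // an abstract class can hold state
    Shape(String name) { this.name = name; }
    String getName() { return name; }
    abstract double area();                     // subclasses must supply this
}

interface Printable {                           // an interface is a contract...
    void print();
    default void printTwice() { print(); print(); }   // ...optionally with default methods
}

// A class extends at most one abstract class but may implement many interfaces.
class Circle extends Shape implements Printable {
    private final double radius;
    Circle(double radius) { super("circle"); this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
    @Override public void print() { System.out.println(getName() + " area = " + area()); }
}

public class InterviewDemo {
    public static void main(String[] args) {
        new Circle(2.0).printTwice();           // prints the circle's area twice
    }
}
```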
WHAT ARE NEURAL NETWORKS?
Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other and that can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training, the weights attached to the different inputs are varied until the output of the network is very close to what is desired, at which point the network has 'learned' how to carry out a particular task.
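To make that concrete, here is a minimal sketch of the idea in Java: a single artificial neuron learns the logical OR function by repeatedly nudging its weights until its output is close to the desired target. The sigmoid activation, learning rate and iteration count are arbitrary choices for illustration, not part of any particular framework.

```java
// Minimal sketch: one neuron learning OR by repeatedly nudging its weights
// until the output is close to the desired target (illustrative values only).
public class TinyNeuron {
    public static void main(String[] args) {
        double[][] inputs  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[]   targets = {0, 1, 1, 1};            // desired outputs (logical OR)
        double w1 = 0.1, w2 = -0.2, bias = 0.0;       // arbitrary starting weights
        double learningRate = 0.5;

        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                double sum = w1 * inputs[i][0] + w2 * inputs[i][1] + bias;
                double out = 1.0 / (1.0 + Math.exp(-sum));     // sigmoid activation
                double error = targets[i] - out;
                // Nudge each weight in proportion to its input and the error
                // (gradient of the squared error through the sigmoid).
                double delta = error * out * (1 - out);
                w1   += learningRate * delta * inputs[i][0];
                w2   += learningRate * delta * inputs[i][1];
                bias += learningRate * delta;
            }
        }
        for (double[] in : inputs) {
            double out = 1.0 / (1.0 + Math.exp(-(w1 * in[0] + w2 * in[1] + bias)));
            System.out.printf("%.0f OR %.0f -> %.3f%n", in[0], in[1], out);
        }
    }
}
```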
A subset of machine learning is deep learning, in which neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks such as speech recognition and computer vision.
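As a rough illustration of layers feeding data into each other, the sketch below pushes an input through two small fully connected layers with made-up weights; a real deep network would stack many more layers and learn its weights from data.

```java
// Illustrative forward pass through two stacked layers (made-up weights).
public class TwoLayerForward {
    // One fully connected layer: output[j] = sigmoid(sum_i input[i] * w[j][i] + b[j])
    static double[] layer(double[] input, double[][] weights, double[] biases) {
        double[] out = new double[weights.length];
        for (int j = 0; j < weights.length; j++) {
            double sum = biases[j];
            for (int i = 0; i < input.length; i++) sum += weights[j][i] * input[i];
            out[j] = 1.0 / (1.0 + Math.exp(-sum));
        }
        return out;
    }

    public static void main(String[] args) {
        double[] input = {0.5, -1.0, 2.0};
        double[][] w1 = {{0.2, -0.4, 0.1}, {0.7, 0.3, -0.6}};   // 3 inputs -> 2 hidden units
        double[] b1 = {0.0, 0.1};
        double[][] w2 = {{1.5, -2.0}};                          // 2 hidden units -> 1 output
        double[] b2 = {0.05};

        double[] hidden = layer(input, w1, b1);   // the first layer feeds the second
        double[] output = layer(hidden, w2, b2);
        System.out.println("network output: " + output[0]);
    }
}
```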
There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers refining a more effective type of recurrent network called long short-term memory (LSTM), allowing it to operate fast enough to be used in on-demand systems like Google Translate.
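The 'recurrent' part can be shown in a few lines: a hidden state is carried from one step of a sequence to the next, which is why these networks suit ordered data like text or speech. The toy unit below uses made-up weights and is not an LSTM, just the basic recurrence.

```java
// Toy recurrent unit: the hidden state from step t-1 is fed back in at step t.
public class TinyRecurrentUnit {
    public static void main(String[] args) {
        double[] sequence = {0.2, 0.9, -0.5, 0.3};        // e.g. a short signal or encoded tokens
        double wInput = 0.8, wHidden = 0.5, bias = 0.0;   // made-up weights
        double hidden = 0.0;                              // initial hidden state

        for (double x : sequence) {
            // The new state depends on the current input AND the previous state.
            hidden = Math.tanh(wInput * x + wHidden * hidden + bias);
            System.out.printf("input %.2f -> hidden state %.3f%n", x, hidden);
        }
    }
}
```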
Another area of AI research is evolutionary computation, which borrows from Darwin's theory of natural selection and sees genetic algorithms subject candidate solutions to random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
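Here is a hedged sketch of that loop in Java: a population of bit strings evolves toward higher fitness through selection, crossover and random mutation. The population size, mutation rate and fitness function are all invented for illustration.

```java
// Minimal sketch of a genetic algorithm: a population of bit strings evolves
// toward higher fitness through selection, crossover, and random mutation.
import java.util.Arrays;
import java.util.Random;

public class TinyGeneticAlgorithm {
    static final Random RNG = new Random(42);
    static final int LENGTH = 20;

    // Fitness = number of 1s (an arbitrary stand-in for "how good is this solution?").
    static int fitness(boolean[] genes) {
        int score = 0;
        for (boolean g : genes) if (g) score++;
        return score;
    }

    // Combine two parents (single-point crossover), then apply random mutations.
    static boolean[] breed(boolean[] a, boolean[] b) {
        boolean[] child = new boolean[LENGTH];
        int cut = RNG.nextInt(LENGTH);
        for (int i = 0; i < LENGTH; i++) {
            child[i] = (i < cut) ? a[i] : b[i];
            if (RNG.nextDouble() < 0.05) child[i] = !child[i];   // mutation
        }
        return child;
    }

    public static void main(String[] args) {
        boolean[][] population = new boolean[30][LENGTH];
        for (boolean[] genes : population)
            for (int i = 0; i < LENGTH; i++) genes[i] = RNG.nextBoolean();

        for (int generation = 0; generation < 50; generation++) {
            // Selection: sort fittest first, then refill the weaker half by breeding survivors.
            Arrays.sort(population, (a, b) -> fitness(b) - fitness(a));
            int survivors = population.length / 2;
            for (int i = survivors; i < population.length; i++)
                population[i] = breed(population[RNG.nextInt(survivors)],
                                      population[RNG.nextInt(survivors)]);
            System.out.println("generation " + generation + " best fitness: " + fitness(population[0]));
        }
    }
}
```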
This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and it could play an important role in helping design efficient AI as intelligent systems become more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement-learning problems.
Finally, there are expert systems, in which computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.
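The rule-based idea can be sketched with a handful of invented, autopilot-flavoured rules; a real expert system would encode a far larger rule base together with an inference engine.

```java
// Illustrative rule-based "expert system": hand-written rules map inputs to decisions.
public class TinyAutopilotRules {
    static String decide(double altitudeFeet, double airspeedKnots, boolean stallWarning) {
        // Each rule mimics a judgement a human expert would make for these inputs.
        if (stallWarning)            return "Pitch nose down and increase thrust";
        if (airspeedKnots > 350)     return "Reduce thrust";
        if (altitudeFeet < 1000)     return "Climb to assigned altitude";
        return "Hold current heading, altitude and speed";
    }

    public static void main(String[] args) {
        System.out.println(decide(900, 250, false));    // -> Climb to assigned altitude
        System.out.println(decide(30000, 400, false));  // -> Reduce thrust
        System.out.println(decide(30000, 250, true));   // -> Pitch nose down and increase thrust
    }
}
```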