Researchers at the Indian Institute of Technology (IIT) Hyderabad have developed a method to examine the inner workings of an Artificial Intelligence (AI) system. The approach helps explain AI models in terms of causal attributes. The work focuses on a class of models known as Artificial Neural Networks (ANNs).
An ANN is a set of AI models and programs designed to mimic the human brain and help machines make decisions the way humans do. Modern ANNs, also known as Deep Learning models, have developed immensely and grown far more complex. These systems learn on their own from the data given to them, and in many cases behave much like the human brain. However, it is often unknown how they arrive at a decision; because their decisions cannot be backed by explicit logic, they are less useful in settings that demand explanations.
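To illustrate the idea, a feedforward neural network can be sketched in a few lines of Python. This is a generic, minimal illustration of how inputs flow through weighted layers to produce a decision; the weights, inputs and network size below are arbitrary stand-ins, not the researchers' model.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    """Squash a weighted sum into a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Random weights for a tiny network: 3 inputs -> 2 hidden neurons -> 1 output
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    """Forward pass: each layer computes weighted sums of the previous one."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

score = forward([0.5, -1.2, 0.3])
print(score)  # a score in (0, 1) that the network "decides" with
```

In a real Deep Learning system the weights are not random but are adjusted automatically from training data, and the network has many more layers and neurons — which is exactly what makes its decisions hard to trace.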
The work was carried out by Dr Vineeth N Balasubramanian, Associate Professor in the Department of Computer Science & Engineering at IIT Hyderabad, together with his students Aditya Chattopadhyay, Anirban Sarkar and Piyushi Manupriya. Their findings were published at the 36th International Conference on Machine Learning, considered one of the top conferences worldwide for Artificial Intelligence.
Speaking about the research, Dr Balasubramanian said that the simplest applications of Deep Learning include machine translation, face recognition and speech recognition. The technology is being used to build voice-enabled controls for day-to-day consumer devices such as mobile phones, smart TVs, tablets, personal computers and smart homes. Sectors such as finance, engineering, artificial perception, control and simulation are adopting its newly developed algorithms. He added that although challenges remain, recent research has impressed everyone.
The 'interpretability problem' is regarded as the main obstacle to deploying Deep Learning in risk-sensitive, real-life applications. Because of their complex internals and many layers, these models become virtual black boxes that are difficult, if not impossible, to decipher. Troubleshooting becomes very hard indeed when something goes wrong with a Deep Learning algorithm.
A further problem is that Deep Learning algorithms are often trained on a limited amount of data that differs from real-world information, and human errors in the training process can cause issues later. So there is always a need for a system that can access the underbelly of AI systems and assess problems at a structural and foundational level. The IIT Hyderabad researchers address this by giving the ANN a causal interpretation, in a framework known as a 'Structural Causal Model.'
Dr Balasubramanian thanked his students Chattopadhyay, Sarkar and Manupriya for their extensive work on the project. His team has proposed a new method to compute the 'Average Causal Effect' of an input neuron on an output neuron. He said it is important to know which causal inputs are responsible for an output within acceptable parameters, and that their research provides the tool to identify the causal inputs responsible for a given effect.
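The flavour of such an attribution can be sketched in plain Python: fix (intervene on) one input at a value, average the model's output while the other inputs vary, and compare against a baseline. The toy model `forward`, the interventional values and the estimator below are illustrative assumptions for this sketch, not the exact method from the paper.

```python
import random

random.seed(1)

def forward(x):
    # Toy stand-in model: the output depends strongly on x[0], weakly on x[1]
    return 2.0 * x[0] + 0.1 * x[1]

def interventional_expectation(i, alpha, n=1000):
    """Estimate E[y | do(x_i = alpha)]: fix input i, sample the others."""
    total = 0.0
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in range(2)]
        x[i] = alpha  # the intervention: force input i to the value alpha
        total += forward(x)
    return total / n

def average_causal_effect(i, alpha, alphas):
    # Baseline: the interventional expectation averaged over candidate values
    baseline = sum(interventional_expectation(i, a) for a in alphas) / len(alphas)
    return interventional_expectation(i, alpha) - baseline

alphas = [-1.0, 0.0, 1.0]
print(average_causal_effect(0, 1.0, alphas))  # roughly 2.0: x[0] matters a lot
print(average_causal_effect(1, 1.0, alphas))  # roughly 0.1: x[1] barely matters
```

The estimates recover the toy model's structure: intervening on the first input shifts the output far more than intervening on the second, identifying it as the causally responsible input.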
As interest in the ethics of AI grows, so does awareness of how Deep Learning methods work. The research and its findings have been made available online: the IIT Hyderabad researchers have released their methods for free download at https://piyushi-0.github.io/ACE/, and the research paper is available at http://proceedings.mlr.press/v97/chattopadhyay19a.html and https://arxiv.org/abs/1902.02302.
Follow CollegeDekho for more news on Artificial Intelligence technologies and developments.