Computational Complexity of Neural Network Understanding

Speaker: 

Time: 

Friday, 27 December 2019, 17:15 to 18:15

Venue: 

  • A-201 (STCS Seminar Room)

Abstract: We study the complexity of simulating a neural network. We model a neural network as a directed graph $G = (V, E)$ whose nodes represent neurons. Each neuron is in one of two states, stimulated or not stimulated, and a neuron becomes stimulated when the states of its input neurons satisfy its activation condition.

We show that simulating even this simplified model is PSPACE-hard: given a neural network $G = (V, E)$, an input neuron $I \in V$, and an output neuron $O \in V$, it is PSPACE-hard to decide whether stimulating the input neuron eventually stimulates the output neuron. This holds even for neural networks whose degree is bounded by 9.
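The simulation question above can be sketched directly, under the assumption that each neuron's condition is an arbitrary Boolean function of its input states and that updates are synchronous (the abstract leaves these details open, so this is only an illustrative reading of the model). Since the network has at most $2^{|V|}$ configurations, the dynamics must eventually revisit a state and cycle, which is why the natural upper bound is PSPACE:

```python
def simulate(n, preds, cond, source, target):
    """Return True if, starting with only `source` stimulated, the
    neuron `target` is ever stimulated under synchronous updates.

    n      -- number of neurons (0 .. n-1)
    preds  -- dict mapping each neuron to the list of its input neurons
    cond   -- dict mapping each neuron to a Boolean function of the
              tuple of its inputs' current states (illustrative choice)
    """
    state = tuple(v == source for v in range(n))
    seen = set()
    while state not in seen:
        if state[target]:
            return True
        seen.add(state)
        # Synchronous update: every neuron recomputes its state from
        # the current states of its input neurons.
        state = tuple(cond[v](tuple(state[u] for u in preds[v]))
                      for v in range(n))
    # A repeated configuration means the dynamics cycle forever
    # without stimulating the target.
    return state[target]


# Example: a chain 0 -> 1 -> 2, where neuron 0 stays on and each other
# neuron fires if any of its inputs fires.
chain_preds = {0: [], 1: [0], 2: [1]}
chain_cond = {0: lambda s: True, 1: any, 2: any}
simulate(3, chain_preds, chain_cond, 0, 2)  # the output neuron fires
```

Note this brute-force walk through configurations can take exponentially many steps; the PSPACE-hardness result says that no fundamentally faster simulation is expected.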

We also show that it is PSPACE-hard to find a minimum degenerate circuit of a neural network, and PSPACE-hard even to approximate the minimum degenerate circuit to within a logarithmic factor.

Finally, we show that it is PSPACE-hard to find a 1-Vital set.

This is joint work with Piyush Srivastava.

Prerequisites: A basic understanding of complexity classes and reductions.