In this brief, a recurrent neural network (NN) modeled by a differential inclusion and based on the method of penalty functions is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the proposed model has the fewest state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, it is shown that the sequence of equilibrium points of the proposed NN converges approximately to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model demonstrate the excellent performance of the proposed recurrent NN.
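The penalty-function idea behind such a recurrent NN can be illustrated with a minimal sketch: the constrained problem is replaced by an exact penalty function, and the network state evolves along a (sub)gradient flow of that penalty. This is a generic toy illustration, not the model of the brief; the problem data, penalty weight `mu`, and Euler step `h` are all assumptions.

```python
import numpy as np

# Toy problem (assumed for illustration): minimize c^T x subject to x >= b,
# i.e. g(x) = b - x <= 0. The optimum is x* = b = (1, 2).
c = np.array([1.0, 1.0])
b = np.array([1.0, 2.0])
mu = 10.0   # penalty weight, assumed large enough for exactness
h = 0.005   # Euler step discretizing the continuous-time dynamics

x = np.zeros(2)
for _ in range(4000):
    viol = (b - x) > 0                  # indicator of violated constraints
    # subgradient of P(x) = c^T x + mu * sum(max(0, b - x))
    grad = c - mu * viol.astype(float)
    x -= h * grad                       # the flow x'(t) = -grad P(x)

print(x)  # close to the optimum (1, 2)
```

The state trajectory settles near the constrained optimum because, outside the feasible set, the penalty term dominates the objective gradient and drives the state back toward feasibility.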
In this brief, by incorporating an inertial term into first-order neural networks (NNs), an inertial NN (INN) modeled by a differential inclusion is proposed for solving the linear complementarity problem (LCP) with a P0 matrix. Compared with existing NNs, the presence of the inertial term overcomes some drawbacks of the many NNs constructed from the steepest descent method, and the model is more convenient for exploring different optimal solutions. It is proved that the proposed NN is stable in the sense of Lyapunov and that any equilibrium point of the NN is an optimal solution of the LCP with a P0 matrix. Simulation results on two numerical examples demonstrate the effectiveness and performance of the proposed NN.
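The effect of an inertial term can be sketched with a heavy-ball discretization of the classic projection dynamics for an LCP. This is a generic second-order illustration, not the INN of the brief; the matrix `M`, vector `q`, damping `gamma`, and step `h` are assumptions chosen so the toy problem has the known solution z* = (1, 1).

```python
import numpy as np

# LCP(M, q): find z >= 0 with w = M z + q >= 0 and z^T w = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # a P-matrix (hence also P0)
q = np.array([-3.0, -3.0])               # assumed data; solution z* = (1, 1)

def F(z):
    # Residual of the projection fixed-point equation
    # z = max(0, z - (M z + q)); F(z) = 0 exactly at LCP solutions.
    return -z + np.maximum(0.0, z - (M @ z + q))

gamma, h = 2.0, 0.01                     # damping and Euler step (assumed)
z = np.zeros(2)
v = np.zeros(2)                          # velocity: the extra inertial state
for _ in range(10000):
    v += h * (-gamma * v + F(z))         # discretizes z'' + gamma z' = F(z)
    z += h * v

print(z)  # close to (1, 1)
```

The velocity variable `v` is the inertial state: instead of following -F(z) directly as a steepest-descent-type network would, the trajectory carries momentum damped by `gamma`, which is the mechanism the inertial term adds on top of first-order dynamics.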