Abduction is a well-known reasoning approach for computing plausible explanations of an observation. It has recently been employed to explain machine learning predictions of samples from a data set by generating subset-minimal or cardinality-minimal explanations with respect to input features. In this paper, we study some complexity properties of such minimal explanations for neural network predictions. We also extend existing work by proposing a randomized procedure for computing subset-minimal explanations. Experimental results on a number of benchmarks show that the resulting explanations are generally smaller than those produced by the standard deterministic subset-minimal procedure, while the strategy is not as expensive as computing cardinality-minimal explanations. It thus serves as a trade-off between the existing strategies of cardinality- and subset-minimal explanations.
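To make the randomized subset-minimal idea concrete, the following sketch (an illustration, not the paper's implementation) runs a deletion-based linear search over a randomly shuffled feature order. The names `predict`, `entails`, and the toy linear model are assumptions introduced for the example: a linear classifier is used in place of a neural network so that the entailment check is decidable by simple interval bounds.

```python
import random

# Toy linear classifier over features in [0, 1]: predicts 1 iff w.x >= t.
# (A hypothetical stand-in for a neural network plus an entailment oracle.)
W = [3.0, -2.0, 1.0, 0.5]
T = 1.0

def predict(x):
    return int(sum(w * xi for w, xi in zip(W, x)) >= T)

def entails(x, fixed):
    """Check whether fixing the features in `fixed` to their values in x
    forces the prediction of x, for every completion of the free features."""
    label = predict(x)
    score = sum(W[i] * x[i] for i in fixed)
    free = [i for i in range(len(x)) if i not in fixed]
    if label == 1:
        # Worst case: each free feature minimizes its contribution on [0, 1].
        worst = score + sum(min(0.0, W[i]) for i in free)
        return worst >= T
    # Best case: each free feature maximizes its contribution on [0, 1].
    best = score + sum(max(0.0, W[i]) for i in free)
    return best < T

def subset_minimal_explanation(x, seed=None):
    """Deletion-based search: start from all features and try to drop each
    one in a random order; different orders can yield different (and
    differently sized) subset-minimal explanations."""
    order = list(range(len(x)))
    random.Random(seed).shuffle(order)
    expl = set(range(len(x)))
    for i in order:
        if entails(x, expl - {i}):
            expl.discard(i)
    return sorted(expl)
```

Running the procedure with several seeds and keeping the smallest result is one way to trade extra (but still polynomial-many) entailment checks for smaller explanations, without the cost of an exact cardinality-minimal computation.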