Recently, major attention has been given to penalized log-likelihood estimators for sparse precision (inverse covariance) matrices. The penalty induces sparsity, and a very common choice is the convex l1 norm. However, the l1 penalty does not always yield the best estimator. To improve sparsity and reduce the bias associated with the l1 norm, one must move to non-convex penalties such as lq (0 ≤ q < 1). In this paper we introduce the resulting non-concave lq penalized log-likelihood problem and derive the corresponding optimality conditions. We present a novel cyclic descent algorithm for the penalized log-likelihood optimization, and show how the derived conditions can be used to reduce its computational cost. We illustrate the approach by comparing reconstruction quality over the range 0 ≤ q ≤ 1 in several experiments.
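To make the objective concrete, the following is a minimal sketch (not the paper's algorithm) of the lq-penalized negative Gaussian log-likelihood for a candidate precision matrix Theta and sample covariance S. The function name, the choice to penalize only off-diagonal entries, and the regularization parameter `lam` are illustrative assumptions; q = 0 is taken as the cardinality (count of nonzeros) limit of the lq penalty.

```python
import numpy as np

def lq_penalized_negloglik(Theta, S, lam, q):
    """Illustrative objective: -log det(Theta) + tr(S Theta) + lam * sum_{i!=j} |Theta_ij|^q.

    Theta : candidate precision (inverse covariance) matrix, assumed positive definite
    S     : sample covariance matrix
    lam   : penalty weight (assumed notation, not from the paper)
    q     : penalty exponent, 0 <= q <= 1
    """
    # Negative Gaussian log-likelihood part (up to constants): -log det + trace term
    sign, logdet = np.linalg.slogdet(Theta)
    nll = -logdet + np.trace(S @ Theta)
    # lq penalty on off-diagonal entries; q = 0 counts nonzeros (l0 limit)
    off_diag = Theta[~np.eye(Theta.shape[0], dtype=bool)]
    if q == 0:
        penalty = np.count_nonzero(off_diag)
    else:
        penalty = np.sum(np.abs(off_diag) ** q)
    return nll + lam * penalty
```

For q < 1 this objective is non-convex in Theta, which is why the optimality conditions and the cyclic descent scheme described in the paper are needed rather than a standard convex solver.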