Variable selection typically involves comparing a large number of models, so fast computation of Bayes factors is highly desirable. This desideratum has made the use of g-priors and Laplace expansions common practice, especially in large dimensions. It is well known, however, that priors with heavier tails often yield better performance in model selection. In this thesis, we follow the conventional approach of Jeffreys (1961) and generalize ideas in Strawderman (1971, 1973) and Berger (1976, 1980, 1985) to propose a prior distribution for variable selection. To the best of our knowledge, this is the first proposal for variable selection that is fully justified from a theoretical point of view, a justification that rests heavily on the invariance ideas in Berger et al. (1998). Moreover, the proposed prior has Student-like tails, enjoys many optimal properties for model selection, and generalizes previous proposals in the literature. In addition, for specific choices of the hyperparameters, it yields closed-form marginal likelihoods (and hence Bayes factors). We demonstrate its behavior in a couple of small problems and in a couple of large, but enumerable, ones.